THEORY OF GAMES
AND ECONOMIC BEHAVIOR
By JOHN VON NEUMANN and
OSKAR MORGENSTERN
PRINCETON
PRINCETON UNIVERSITY PRESS
1953
Copyright 1944, by Princeton University Press
PRINTED IN THE UNITED STATES OF AMERICA
Second printing (SECOND EDITION), 1947
Third printing, 1948
Fourth printing, 1950
Fifth printing (THIRD EDITION), 1953
Sixth printing, 1955
LONDON: GEOFFREY CUMBERLEGE OXFORD UNIVERSITY PRESS
PREFACE TO FIRST EDITION
This book contains an exposition and various applications of a mathematical
theory of games. The theory has been developed by one of us
since 1928 and is now published for the first time in its entirety. The
applications are of two kinds: On the one hand to games in the proper sense,
on the other hand to economic and sociological problems which, as we hope
to show, are best approached from this direction.
The applications which we shall make to games serve at least as much
to corroborate the theory as to investigate these games. The nature of this
reciprocal relationship will become clear as the investigation proceeds.
Our major interest is, of course, in the economic and sociological direction.
Here we can approach only the simplest questions. However, these
questions are of a fundamental character. Furthermore, our aim is primarily
to show that there is a rigorous approach to these subjects, involving, as
they do, questions of parallel or opposite interest, perfect or imperfect
information, free rational decision or chance influences.
JOHN VON NEUMANN
OSKAR MORGENSTERN.
PRINCETON, N. J.
January, 1943.
PREFACE TO SECOND EDITION
The second edition differs from the first in some minor respects only.
We have carried out as complete an elimination of misprints as possible, and
wish to thank several readers who have helped us in that respect. We have
added an Appendix containing an axiomatic derivation of numerical utility.
This subject was discussed in considerable detail, but in the main
qualitatively, in Section 3. A publication of this proof in a periodical was promised
in the first edition, but we found it more convenient to add it as an Appendix.
Various Appendices on applications to the theory of location of industries
and on questions of the four and five person games were also planned, but
had to be abandoned because of the pressure of other work.
Since publication of the first edition several papers dealing with the
subject matter of this book have appeared.
The attention of the mathematically interested reader may be drawn
to the following: A. Wald developed a new theory of the foundations of
statistical estimation which is closely related to, and draws on, the theory of
the zero-sum two-person game ("Statistical Decision Functions Which
Minimize the Maximum Risk," Annals of Mathematics, Vol. 46 (1945),
pp. 265-280). He also extended the main theorem of the zero-sum two-person
games (cf. 17.6.) to certain continuous-infinite cases ("Generalization
of a Theorem by von Neumann Concerning Zero-Sum Two-Person Games,"
Annals of Mathematics, Vol. 46 (1945), pp. 281-286). A new, very simple
and elementary proof of this theorem (which covers also the more general
theorem referred to in footnote 1 on page 154) was given by L. H. Loomis
("On a Theorem of von Neumann," Proc. Nat. Acad., Vol. 32 (1946),
pp. 213-215). Further, interesting results concerning the role of pure and of mixed
strategies in the zero-sum two-person game were obtained by I. Kaplansky
("A Contribution to von Neumann's Theory of Games," Annals of
Mathematics, Vol. 46 (1945), pp. 474-479). We also intend to come back to
various mathematical aspects of this problem. The group theoretical problem
mentioned in footnote 1 on page 258 was solved by C. Chevalley.
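The main theorem referred to above (proved in 17.6.) is what is now called the minimax theorem. For the reader's convenience it may be stated compactly; the matrix notation below is a modern gloss, not the book's own:

```latex
% Minimax theorem for zero-sum two-person games (cf. 17.6.).
% Modern notation (a gloss, not the book's own): A is the m-by-n matrix of
% payoffs to player 1, and S_m, S_n are the sets of mixed strategies
% (probability vectors) of the two players.
\[
  \max_{\xi \in S_m} \; \min_{\eta \in S_n} \; \xi^{\mathsf{T}} A \,\eta
  \;=\;
  \min_{\eta \in S_n} \; \max_{\xi \in S_m} \; \xi^{\mathsf{T}} A \,\eta .
\]
% The common value of the two sides is the value of the game; the papers of
% Wald and Loomis cited here extend or re-prove this equality.
```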
The economically interested reader may find an easier approach to the
problems of this book in the expositions of L. Hurwicz ("The Theory of
Economic Behavior," American Economic Review, Vol. 35 (1945), pp. 909-925)
and of J. Marschak ("Neumann's and Morgenstern's New Approach
to Static Economics," Journal of Political Economy, Vol. 54 (1946),
pp. 97-115).
JOHN VON NEUMANN
OSKAR MORGENSTERN
PRINCETON, N. J.
September, 1946.
PREFACE TO THIRD EDITION
The Third Edition differs from the Second Edition only in the elimination
of such further misprints as have come to our attention in the meantime,
and we wish to thank several readers who have helped us in that respect.
Since the publication of the Second Edition, the literature on this subject
has increased very considerably. A complete bibliography at this writing
includes several hundred titles. We are therefore not attempting to give
one here. We will only list the following books on this subject:
(1) H. W. Kuhn and A. W. Tucker (eds.), "Contributions to the Theory
of Games, I," Annals of Mathematics Studies, No. 24, Princeton (1950),
containing fifteen articles by thirteen authors.
(2) H. W. Kuhn and A. W. Tucker (eds.), "Contributions to the Theory
of Games, II," Annals of Mathematics Studies, No. 28, Princeton (1953),
containing twenty-one articles by twenty-two authors.
(3) J. McDonald, Strategy in Poker, Business and War, New York
(1950).
(4) J. C. C. McKinsey, Introduction to the Theory of Games, New
York (1952).
(5) A. Wald, Statistical Decision Functions, New York (1950).
(6) J. Williams, The Compleat Strategyst, Being a Primer on the Theory
of Games of Strategy, New York (1953).
Bibliographies on the subject are found in all of the above books except
(6). Extensive work in this field has been done during the last years by the
staff of the RAND Corporation, Santa Monica, California. A bibliography
of this work can be found in the RAND publication RM-950.
In the theory of n-person games, there have been some further developments
in the direction of "non-cooperative" games. In this respect,
particularly the work of J. F. Nash, "Non-cooperative Games," Annals of
Mathematics, Vol. 54 (1951), pp. 286-295, must be mentioned. Further
references to this work are found in (1), (2), and (4).
Of various developments in economics we mention in particular "linear
programming" and the "assignment problem" which also appear to be
increasingly connected with the theory of games. The reader will find
indications of this again in (1), (2), and (4).
The theory of utility suggested in Section 1.3., and in the Appendix to the
Second Edition has undergone considerable development theoretically, as
well as experimentally, and in various discussions. In this connection, the
reader may consult in particular the following:
M. Friedman and L. J. Savage, "The Utility Analysis of Choices Involving
Risk," Journal of Political Economy, Vol. 56 (1948), pp. 279-304.
J. Marschak, "Rational Behavior, Uncertain Prospects, and Measurable
Utility," Econometrica, Vol. 18 (1950), pp. 111-141.
F. Mosteller and P. Nogee, "An Experimental Measurement of Utility,"
Journal of Political Economy, Vol. 59 (1951), pp. 371-404.
M. Friedman and L. J. Savage, "The Expected Utility Hypothesis and
the Measurability of Utility," Journal of Political Economy, Vol. 60
(1952), pp. 463-474.
See also the Symposium on Cardinal Utilities in Econometrica, Vol. 20,
(1952):
H. Wold, "Ordinal Preferences or Cardinal Utility?"
A. S. Manne, "The Strong Independence Assumption: Gasoline
Blends and Probability Mixtures."
P. A. Samuelson, "Probability, Utility, and the Independence Axiom."
E. Malinvaud, "Note on von Neumann-Morgenstern's Strong Independence
Axiom."
In connection with the methodological critique exercised by some of the
contributors to the last-mentioned symposium, we would like to mention
that we applied the axiomatic method in the customary way with the
customary precautions. Thus the strict, axiomatic treatment of the concept
of utility (in Section 3.6. and in the Appendix) is complemented by an
heuristic preparation (in Sections 3.1.-3.5.). The latter's function is to
convey to the reader the viewpoints to evaluate and to circumscribe the
validity of the subsequent axiomatic procedure. In particular our
discussion and selection of "natural operations" in those sections covers what
seems to us the relevant substrate of the Samuelson-Malinvaud
"independence axiom."
JOHN VON NEUMANN
OSKAR MORGENSTERN
PRINCETON, N. J.
January, 1953.
TECHNICAL NOTE
The nature of the problems investigated and the techniques employed
in this book necessitate a procedure which in many instances is thoroughly
mathematical. The mathematical devices used are elementary in the sense
that no advanced algebra, or calculus, etc., occurs. (With two, rather
unimportant, exceptions: Part of the discussion of an example in 19.7. et sequ. and
a remark in A.3.3. make use of some simple integrals.) Concepts originating
in set theory, linear geometry and group theory play an important role, but
they are invariably taken from the early chapters of those disciplines and are
moreover analyzed and explained in special expository sections. Nevertheless
the book is not truly elementary because the mathematical deductions
are frequently intricate and the logical possibilities are extensively exploited.
Thus no specific knowledge of any particular body of advanced mathematics
is required. However, the reader who wants to acquaint himself
more thoroughly with the subject expounded here, will have to familiarize
himself with the mathematical way of reasoning definitely beyond its
routine, primitive phases. The character of the procedures will be mostly
that of mathematical logics, set theory and functional analysis.
We have attempted to present the subject in such a form that a reader
who is moderately versed in mathematics can acquire the necessary practice
in the course of this study. We hope that we have not entirely failed in
this endeavour.
In accordance with this, the presentation is not what it would be in a
strictly mathematical treatise. All definitions and deductions are considerably
broader than they would be there. Besides, purely verbal discussions
and analyses take up a considerable amount of space. We have
in particular tried to give, whenever possible, a parallel verbal exposition
for every major mathematical deduction. It is hoped that this procedure
will elucidate in unmathematical language what the mathematical technique
signifies and will also show where it achieves more than can be done
without it.
In this, as well as in our methodological stand, we are trying to follow
the best examples of theoretical physics.
The reader who is not specifically interested in mathematics should at
first omit those sections of the book which in his judgment are too
mathematical. We prefer not to give a definite list of them, since this judgment
must necessarily be subjective. However, those sections marked with an
asterisk in the table of contents are most likely to occur to the average reader
in this connection. At any rate he will find that these omissions will little
interfere with the comprehension of the early parts, although the logical
chain may in the rigorous sense have suffered an interruption. As he
proceeds the omissions will gradually assume a more serious character and
the lacunae in the deduction will become more and more significant. The
reader is then advised to start again from the beginning since the greater
familiarity acquired is likely to facilitate a better understanding.
ACKNOWLEDGMENT
The authors wish to express their thanks to Princeton University and to
the Institute for Advanced Study for their generous help which rendered
this publication possible.
They are also greatly indebted to the Princeton University Press which
has made every effort to publish this book in spite of wartime difficulties.
The publisher has shown at all times the greatest understanding for the
authors' wishes.
CONTENTS
PREFACE v
TECHNICAL NOTE ix
ACKNOWLEDGMENT x
CHAPTER I
FORMULATION OF THE ECONOMIC PROBLEM
1. THE MATHEMATICAL METHOD IN ECONOMICS 1
1.1. Introductory remarks 1
1.2. Difficulties of the application of the mathematical method 2
1.3. Necessary limitations of the objectives 6
1.4. Concluding remarks 7
2. QUALITATIVE DISCUSSION OF THE PROBLEM OF RATIONAL BEHAVIOR 8
2.1. The problem of rational behavior 8
2.2. "Robinson Crusoe" economy and social exchange economy 9
2.3. The number of variables and the number of participants 12
2.4. The case of many participants: Free competition 13
2.5. The "Lausanne" theory 15
3. THE NOTION OF UTILITY 15
3.1. Preferences and utilities 15
3.2. Principles of measurement: Preliminaries 16
3.3. Probability and numerical utilities 17
3.4. Principles of measurement: Detailed discussion 20
3.5. Conceptual structure of the axiomatic treatment of numerical
utilities 24
3.6. The axioms and their interpretation 26
3.7. General remarks concerning the axioms 28
3.8. The role of the concept of marginal utility 29
4. STRUCTURE OF THE THEORY: SOLUTIONS AND STANDARDS OF
BEHAVIOR 31
4.1. The simplest concept of a solution for one participant 31
4.2. Extension to all participants 33
4.3. The solution as a set of imputations 34
4.4. The intransitive notion of "superiority" or "domination" 37
4.5. The precise definition of a solution 39
4.6. Interpretation of our definition in terms of "standards of behavior" 40
4.7. Games and social organizations 43
4.8. Concluding remarks 43
CHAPTER II
GENERAL FORMAL DESCRIPTION OF GAMES OF STRATEGY
5. INTRODUCTION 46
5.1. Shift of emphasis from economics to games 46
5.2. General principles of classification and of procedure 46
6. THE SIMPLIFIED CONCEPT OF A GAME 48
6.1. Explanation of the termini technici 48
6.2. The elements of the game 49
6.3. Information and preliminarity 51
6.4. Preliminarity, transitivity, and signaling 51
7. THE COMPLETE CONCEPT OF A GAME 55
7.1. Variability of the characteristics of each move 55
7.2. The general description 57
8. SETS AND PARTITIONS 60
8.1. Desirability of a set-theoretical description of a game 60
8.2. Sets, their properties, and their graphical representation 61
8.3. Partitions, their properties, and their graphical representation 63
8.4. Logistic interpretation of sets and partitions 66
*9. THE SET-THEORETICAL DESCRIPTION OF A GAME 67
*9.1. The partitions which describe a game 67
*9.2. Discussion of these partitions and their properties 71
*10. AXIOMATIC FORMULATION 73
*10.1. The axioms and their interpretations 73
*10.2. Logistic discussion of the axioms 76
*10.3. General remarks concerning the axioms 76
*10.4. Graphical representation 77
11. STRATEGIES AND THE FINAL SIMPLIFICATION OF THE DESCRIPTION
OF A GAME 79
11.1. The concept of a strategy and its formalization 79
11.2. The final simplification of the description of a game 81
11.3. The role of strategies in the simplified form of a game 84
11.4. The meaning of the zero-sum restriction 84
CHAPTER III
ZERO-SUM TWO-PERSON GAMES: THEORY
12. PRELIMINARY SURVEY 85
12.1. General viewpoints 85
12.2. The oneperson game 85
12.3. Chance and probability 87
12.4. The next objective 87
13. FUNCTIONAL CALCULUS 88
13.1. Basic definitions 88
13.2. The operations Max and Min 89
13.3. Commutativity questions 91
13.4. The mixed case. Saddle points 93
13.5. Proofs of the main facts 95
14. STRICTLY DETERMINED GAMES 98
14.1. Formulation of the problem 98
14.2. The minorant and the majorant games 100
14.3. Discussion of the auxiliary games 101
14.4. Conclusions 105
14.5. Analysis of strict determinateness 106
14.6. The interchange of players. Symmetry 109
14.7. Non strictly determined games 110
14.8. Program of a detailed analysis of strict determinateness 111
*15. GAMES WITH PERFECT INFORMATION 112
*15.1. Statement of purpose. Induction 112
*15.2. The exact condition (First step) 114
*15.3. The exact condition (Entire induction) 116
*15.4. Exact discussion of the inductive step 117
*15.5. Exact discussion of the inductive step (Continuation) 120
*15.6. The result in the case of perfect information 123
*15.7. Application to Chess 124
*15.8. The alternative, verbal discussion 126
16. LINEARITY AND CONVEXITY 128
16.1. Geometrical background 128
16.2. Vector operations 129
16.3. The theorem of the supporting hyperplanes 134
16.4. The theorem of the alternative for matrices 138
17. MIXED STRATEGIES. THE SOLUTION FOR ALL GAMES 143
17.1. Discussion of two elementary examples 143
17.2. Generalization of this viewpoint 145
17.3. Justification of the procedure as applied to an individual play 146
17.4. The minorant and the majorant games. (For mixed strategies) 149
17.5. General strict determinateness 150
17.6. Proof of the main theorem 153
17.7. Comparison of the treatment by pure and by mixed strategies 155
17.8. Analysis of general strict determinateness 158
17.9. Further characteristics of good strategies 160
17.10. Mistakes and their consequences. Permanent optimality 162
17.11. The interchange of players. Symmetry 165
CHAPTER IV
ZERO-SUM TWO-PERSON GAMES: EXAMPLES
18. SOME ELEMENTARY GAMES 169
18.1. The simplest games 169
18.2. Detailed quantitative discussion of these games 170
18.3. Qualitative characterizations 173
18.4. Discussion of some specific games. (Generalized forms of Matching
Pennies) 175
18.5. Discussion of some slightly more complicated games 178
18.6. Chance and imperfect information 182
18.7. Interpretation of this result 185
*19. POKER AND BLUFFING 186
*19.1. Description of Poker 186
*19.2. Bluffing 188
*19.3. Description of Poker (Continued) 189
*19.4. Exact formulation of the rules 190
*19.5. Description of the strategy 191
*19.6. Statement of the problem 195
*19.7. Passage from the discrete to the continuous problem 196
*19.8. Mathematical determination of the solution 199
*19.9. Detailed analysis of the solution 202
*19.10. Interpretation of the solution 204
*19.11. More general forms of Poker 207
19.12. Discrete hands 208
*19.13. m possible bids 209
*19.14. Alternate bidding 211
*19.15. Mathematical description of all solutions 216
*19.16. Interpretation of the solutions. Conclusions 218
CHAPTER V
ZERO-SUM THREE-PERSON GAMES
20. PRELIMINARY SURVEY 220
20.1. General viewpoints 220
20.2. Coalitions 221
21. THE SIMPLE MAJORITY GAME OF THREE PERSONS 222
21.1. Definition of the game 222
21.2. Analysis of the game: Necessity of "understandings" 223
21.3. Analysis of the game: Coalitions. The role of symmetry 224
22. FURTHER EXAMPLES 225
22.1. Unsymmetric distributions. Necessity of compensations 225
22.2. Coalitions of different strength. Discussion 227
22.3. An inequality. Formulae 229
23. THE GENERAL CASE 231
23.1. Detailed discussion. Inessential and essential games 231
23.2. Complete formulae 232
24. DISCUSSION OF AN OBJECTION 233
24.1. The case of perfect information and its significance 233
24.2. Detailed discussion. Necessity of compensations between three or
more players 235
CHAPTER VI
FORMULATION OF THE GENERAL THEORY:
ZERO-SUM n-PERSON GAMES
25. THE CHARACTERISTIC FUNCTION 238
25.1. Motivation and definition 238
25.2. Discussion of the concept 240
25.3. Fundamental properties 241
25.4. Immediate mathematical consequences 242
26. CONSTRUCTION OF A GAME WITH A GIVEN CHARACTERISTIC
FUNCTION 243
26.1. The construction 243
26.2. Summary 245
27. STRATEGIC EQUIVALENCE. INESSENTIAL AND ESSENTIAL GAMES 245
27.1. Strategic equivalence. The reduced form 245
27.2. Inequalities. The quantity γ 248
27.3. Inessentiality and essentiality 249
27.4. Various criteria. Non-additive utilities 250
27.5. The inequalities in the essential case 252
27.6. Vector operations on characteristic functions 253
28. GROUPS, SYMMETRY AND FAIRNESS 255
28.1. Permutations, their groups and their effect on a game 255
28.2. Symmetry and fairness 258
29. RECONSIDERATION OF THE ZERO-SUM THREE-PERSON GAME 260
29.1. Qualitative discussion 260
29.2. Quantitative discussion 262
30. THE EXACT FORM OF THE GENERAL DEFINITIONS 263
30.1. The definitions 263
30.2. Discussion and recapitulation 265
*30.3. The concept of saturation 266
30.4. Three immediate objectives 271
31. FIRST CONSEQUENCES 272
31.1. Convexity, flatness, and some criteria for domination 272
31.2. The system of all imputations. One element solutions 277
31.3. The isomorphism which corresponds to strategic equivalence 281
32. DETERMINATION OF ALL SOLUTIONS OF THE ESSENTIAL ZERO-SUM
THREE-PERSON GAME 282
32.1. Formulation of the mathematical problem. The graphical method 282
32.2. Determination of all solutions 285
33. CONCLUSIONS 288
33.1. The multiplicity of solutions. Discrimination and its meaning 288
33.2. Statics and dynamics 290
CHAPTER VII
ZERO-SUM FOUR-PERSON GAMES
34. PRELIMINARY SURVEY 291
34.1. General viewpoints 291
34.2. Formalism of the essential zero-sum four-person games 291
34.3. Permutations of the players 294
35. DISCUSSION OF SOME SPECIAL POINTS IN THE CUBE Q 295
35.1. The corner I. (and V., VI., VII.) 295
35.2. The corner VIII. (and II., III., IV.). The three person game and
a "Dummy" 299
35.3. Some remarks concerning the interior of Q 302
36. DISCUSSION OF THE MAIN DIAGONALS 304
36.1. The part adjacent to the corner VIII.: Heuristic discussion 304
36.2. The part adjacent to the corner VIII.: Exact discussion 307
*36.3. Other parts of the main diagonals 312
37. THE CENTER AND ITS ENVIRONS 313
37.1. First orientation about the conditions around the center 313
37.2. The two alternatives and the role of symmetry 315
37.3. The first alternative at the center 316
37.4. The second alternative at the center 317
37.5. Comparison of the two central solutions 318
37.6. Unsymmetrical central solutions 319
*38. A FAMILY OF SOLUTIONS FOR A NEIGHBORHOOD OF THE CENTER 321
*38.1. Transformation of the solution belonging to the first alternative at
the center 321
*38.2. Exact discussion 322
*38.3. Interpretation of the solutions 327
CHAPTER VIII
SOME REMARKS CONCERNING n ≥ 5 PARTICIPANTS
39. THE NUMBER OF PARAMETERS IN VARIOUS CLASSES OF GAMES 330
39.1. The situation for n = 3, 4 330
39.2. The situation for all n ≥ 3 330
40. THE SYMMETRIC FIVE PERSON GAME 332
40.1. Formalism of the symmetric five person game 332
40.2. The two extreme cases 332
40.3. Connection between the symmetric five person game and the 1, 2, 3
symmetric four person game 334
CHAPTER IX
COMPOSITION AND DECOMPOSITION OF GAMES
41. COMPOSITION AND DECOMPOSITION 339
41.1. Search for n-person games for which all solutions can be determined 339
41.2. The first type. Composition and decomposition 340
41.3. Exact definitions 341
41.4. Analysis of decomposability 343
41.5. Desirability of a modification 345
42. MODIFICATION OF THE THEORY 345
42.1. No complete abandonment of the zero-sum restriction 345
42.2. Strategic equivalence. Constant-sum games 346
42.3. The characteristic function in the new theory 348
42.4. Imputations, domination, solutions in the new theory 350
42.5. Essentiality, inessentiality and decomposability in the new theory 351
43. THE DECOMPOSITION PARTITION 353
43.1. Splitting sets. Constituents 353
43.2. Properties of the system of all splitting sets 353
43.3. Characterization of the system of all splitting sets. The decomposition
partition 354
43.4. Properties of the decomposition partition 357
44. DECOMPOSABLE GAMES. FURTHER EXTENSION OF THE THEORY 358
44.1. Solutions of a (decomposable) game and solutions of its constituents 358
44.2. Composition and decomposition of imputations and of sets of
imputations 359
44.3. Composition and decomposition of solutions. The main possibilities
and surmises 361
44.4. Extension of the theory. Outside sources 363
44.5. The excess 364
44.6. Limitations of the excess. The non-isolated character of a game in
the new setup 366
44.7. Discussion of the new setup. E(e₀), F(e₀) 367
45. LIMITATIONS OF THE EXCESS. STRUCTURE OF THE EXTENDED
THEORY 368
45.1. The lower limit of the excess 368
45.2. The upper limit of the excess. Detached and fully detached imputations 369
45.3. Discussion of the two limits, |Γ|₁, |Γ|₂. Their ratio 372
45.4. Detached imputations and various solutions. The theorem connecting
E(e₀), F(e₀) 375
45.5. Proof of the theorem 376
45.6. Summary and conclusions 380
46. DETERMINATION OF ALL SOLUTIONS OF A DECOMPOSABLE GAME 381
46.1. Elementary properties of decompositions 381
46.2. Decomposition and its relation to the solutions: First results concerning
F(e₀) 384
46.3. Continuation 386
46.4. Continuation 388
46.5. The complete result in F(e₀) 390
46.6. The complete result in E(e₀) 393
46.7. Graphical representation of a part of the result 394
46.8. Interpretation: The normal zone. Heredity of various properties 396
46.9. Dummies 397
46.10. Imbedding of a game 398
46.11. Significance of the normal zone 401
46.12. First occurrence of the phenomenon of transfer: n = 6 402
47. THE ESSENTIAL THREE-PERSON GAME IN THE NEW THEORY 403
47.1. Need for this discussion 403
47.2. Preparatory considerations 403
47.3. The six cases of the discussion. Cases (I)-(III) 406
47.4. Case (IV): First part 407
47.5. Case (IV): Second part 409
47.6. Case (V) 413
47.7. Case (VI) 415
47.8. Interpretation of the result: The curves (one dimensional parts) in
the solution 416
47.9. Continuation: The areas (two dimensional parts) in the solution 418
CHAPTER X
SIMPLE GAMES
48. WINNING AND LOSING COALITIONS AND GAMES WHERE THEY
OCCUR 420
48.1. The second type of 41.1. Decision by coalitions 420
48.2. Winning and Losing Coalitions 421
49. CHARACTERIZATION OF THE SIMPLE GAMES 423
49.1. General concepts of winning and losing coalitions 423
49.2. The special role of one element sets 425
49.3. Characterization of the systems W, L of actual games 426
49.4. Exact definition of simplicity 428
49.5. Some elementary properties of simplicity 428
49.6. Simple games and their W, L. The Minimal winning coalitions: Wᵐ 429
49.7. The solutions of simple games 430
50. THE MAJORITY GAMES AND THE MAIN SOLUTION 431
50.1. Examples of simple games: The majority games 431
50.2. Homogeneity 433
50.3. A more direct use of the concept of imputation in forming solutions 435
50.4. Discussion of this direct approach 436
50.5. Connections with the general theory. Exact formulation 438
50.6. Reformulation of the result 440
50.7. Interpretation of the result 442
50.8. Connection with the Homogeneous Majority game. 443
51. METHODS FOR THE ENUMERATION OF ALL SIMPLE GAMES 445
51.1. Preliminary Remarks 445
51.2. The saturation method: Enumeration by means of W 446
51.3. Reasons for passing from W to Wᵐ. Difficulties of using Wᵐ 448
51.4. Changed Approach: Enumeration by means of Wᵐ 450
51.5. Simplicity and decomposition 452
51.6. Inessentiality, Simplicity and Composition. Treatment of the excess 454
51.7. A criterium of decomposability in terms of Wᵐ 455
52. THE SIMPLE GAMES FOR SMALL n 457
52.1. Program. n = 1, 2 play no role. Disposal of n = 3 457
52.2. Procedure for n ≥ 4: The two element sets and their role in classifying
the Wᵐ 458
52.3. Decomposability of cases C*, Cₙ₋₂, Cₙ₋₁ 459
52.4. The simple games other than [1, ..., 1, n-2]ₕ (with dummies):
The Cases Cₖ, k = 0, 1, ..., n-3 461
52.5. Disposal of n = 4, 5 462
53. THE NEW POSSIBILITIES OF SIMPLE GAMES FOR n ≥ 6 463
53.1. The Regularities observed for n ≤ 6 463
53.2. The six main counter examples (for n = 6, 7) 464
54. DETERMINATION OF ALL SOLUTIONS IN SUITABLE GAMES 470
54.1. Reasons to consider other solutions than the main solution in simple
games 470
54.2. Enumeration of those games for which all solutions are known 471
54.3. Reasons to consider the simple game [1, ..., 1, n-2]ₕ 472
*55. THE SIMPLE GAME [1, ..., 1, n-2]ₕ 473
*55.1. Preliminary Remarks 473
*55.2. Domination. The chief player. Cases (I) and (II) 473
*55.3. Disposal of Case (I) 475
*55.4. Case (II): Determination of V̄ 478
*55.5. Case (II): Determination of V 481
*55.6. Case (II): α and S* 484
*55.7. Cases (II′) and (II″). Disposal of Case (II′) 485
*55.8. Case (II″): α and V. Domination 487
*55.9. Case (II″): Determination of V 488
*55.10. Disposal of Case (II″) 494
*55.11. Reformulation of the complete result 497
*55.12. Interpretation of the result 499
CHAPTER XI
GENERAL NON-ZERO-SUM GAMES
56. EXTENSION OF THE THEORY 504
56.1. Formulation of the problem 504
56.2. The fictitious player. The zero-sum extension Γ̄ 505
56.3. Questions concerning the character of Γ̄ 506
56.4. Limitations of the use of Γ̄ 508
56.5. The two possible procedures 510
56.6. The discriminatory solutions 511
56.7. Alternative possibilities 512
56.8. The new setup 514
56.9. Reconsideration of the case when Γ is a zero-sum game 516
56.10. Analysis of the concept of domination 520
56.11. Rigorous discussion 523
56.12. The new definition of a solution 526
57. THE CHARACTERISTIC FUNCTION AND RELATED TOPICS 527
57.1. The characteristic function: The extended and the restricted form 527
57.2. Fundamental properties 528
57.3. Determination of all characteristic functions 530
57.4. Removable sets of players 533
57.5. Strategic equivalence. Zero-sum and constant-sum games 535
58. INTERPRETATION OF THE CHARACTERISTIC FUNCTION 538
58.1. Analysis of the definition 538
58.2. The desire to make a gain vs. that to inflict a loss 539
58.3. Discussion 541
59. GENERAL CONSIDERATIONS 542
59.1. Discussion of the program 542
59.2. The reduced forms. The inequalities 543
59.3. Various topics 546
60. THE SOLUTIONS OF ALL GENERAL GAMES WITH n ≤ 3 548
60.1. The case n = 1 548
60.2. The case n = 2 549
60.3. The case n = 3 550
60.4. Comparison with the zero-sum games 554
61. ECONOMIC INTERPRETATION OF THE RESULTS FOR n = 1, 2 555
61.1. The case n = 1 555
61.2. The case n = 2. The two-person market 555
61.3. Discussion of the two-person market and its characteristic function 557
61.4. Justification of the standpoint of 58 559
61.5. Divisible goods. The "marginal pairs" 560
61.6. The price. Discussion 562
62. ECONOMIC INTERPRETATION OF THE RESULTS FOR n = 3: SPECIAL
CASE 564
62.1. The case n = 3, special case. The three-person market 564
62.2. Preliminary discussion 566
62.3. The solutions: First subcase 566
62.4. The solutions: General form 569
62.5. Algebraical form of the result 570
62.6. Discussion 571
63. ECONOMIC INTERPRETATION OF THE RESULTS FOR n = 3: GENERAL
CASE 573
63.1. Divisible goods 573
63.2. Analysis of the inequalities 575
63.3. Preliminary discussion 577
63.4. The solutions 577
63.5. Algebraical form of the result 580
63.6. Discussion 581
64. THE GENERAL MARKET 583
64.1. Formulation of the problem 583
64.2. Some special properties. Monopoly and monopsony 584
CHAPTER XII
EXTENSION OF THE CONCEPTS OF DOMINATION
AND SOLUTION
65. THE EXTENSION. SPECIAL CASES 587
65.1. Formulation of the problem 587
65.2. General remarks 588
65.3. Orderings, transitivity, acyclicity 589
65.4. The solutions: For a symmetric relation. For a complete ordering 591
65.5. The solutions: For a partial ordering 592
65.6. Acyclicity and strict acyclicity 594
65.7. The solutions: For an acyclic relation 597
65.8. Uniqueness of solutions, acyclicity and strict acyclicity 600
65.9. Application to games: Discreteness and continuity 602
66. GENERALIZATION OF THE CONCEPT OF UTILITY 603
66.1. The generalization. The two phases of the theoretical treatment 603
66.2. Discussion of the first phase 604
66.3. Discussion of the second phase 606
66.4. Desirability of unifying the two phases 607
67. DISCUSSION OF AN EXAMPLE 608
67.1. Description of the example 608
67.2. The solution and its interpretation 611
67.3. Generalization: Different discrete utility scales 614
67.4. Conclusions concerning bargaining 616
APPENDIX: THE AXIOMATIC TREATMENT OF UTILITY 617
INDEX OF FIGURES 633
INDEX OF NAMES 634
INDEX OF SUBJECTS 635
CHAPTER I
FORMULATION OF THE ECONOMIC PROBLEM
1. The Mathematical Method in Economics
1.1. Introductory Remarks
1.1.1. The purpose of this book is to present a discussion of some fundamental
questions of economic theory which require a treatment different
from that which they have found thus far in the literature. The analysis
is concerned with some basic problems arising from a study of economic
behavior which have been the center of attention of economists for a long
time. They have their origin in the attempts to find an exact description
of the endeavor of the individual to obtain a maximum of utility, or, in the
case of the entrepreneur, a maximum of profit. It is well known what
considerable and in fact unsurmounted difficulties this task involves
given even a limited number of typical situations, as, for example, in the
case of the exchange of goods, direct or indirect, between two or more
persons, of bilateral monopoly, of duopoly, of oligopoly, and of free compe
tition. It will be made clear that the structure of these problems, familiar
to every student of economics, is in many respects quite different from the
way in which they are conceived at the present time. It will appear,
furthermore, that their exact positing and subsequent solution can only be
achieved with the aid of mathematical methods which diverge considerably
from the techniques applied by older or by contemporary mathematical
economists.
1.1.2. Our considerations will lead to the application of the mathematical theory of "games of strategy" developed by one of us in several successive stages in 1928 and 1940-1941. 1 After the presentation of this theory, its application to economic problems in the sense indicated above will be undertaken. It will appear that it provides a new approach to a number of economic questions as yet unsettled.
We shall first have to find in which way this theory of games can be brought into relationship with economic theory, and what their common elements are. This can be done best by stating briefly the nature of some fundamental economic problems so that the common elements will be seen clearly. It will then become apparent that there is not only nothing artificial in establishing this relationship but that on the contrary this theory of games of strategy is the proper instrument with which to develop a theory of economic behavior.
1 The first phases of this work were published: J. von Neumann, "Zur Theorie der Gesellschaftsspiele," Math. Annalen, vol. 100 (1928), pp. 295-320. The subsequent completion of the theory, as well as the more detailed elaboration of the considerations of loc. cit. above, are published here for the first time.
One would misunderstand the intent of our discussions by interpreting them as merely pointing out an analogy between these two spheres. We hope to establish satisfactorily, after developing a few plausible schematizations, that the typical problems of economic behavior become strictly identical with the mathematical notions of suitable games of strategy.
1.2. Difficulties of the Application of the Mathematical Method
1.2.1. It may be opportune to begin with some remarks concerning the
nature of economic theory and to discuss briefly the question of the role
which mathematics may take in its development.
First let us be aware that there exists at present no universal system of
economic theory and that, if one should ever be developed, it will very
probably not be during our lifetime. The reason for this is simply that
economics is far too difficult a science to permit its construction rapidly,
especially in view of the very limited knowledge and imperfect description
of the facts with which economists are dealing. Only those who fail to
appreciate this condition are likely to attempt the construction of universal
systems. Even in sciences which are far more advanced than economics,
like physics, there is no universal system available at present.
To continue the simile with physics: It happens occasionally that a particular physical theory appears to provide the basis for a universal system, but in all instances up to the present time this appearance has not lasted more than a decade at best. The everyday work of the research physicist is certainly not involved with such high aims, but rather is concerned with special problems which are "mature." There would probably be no progress at all in physics if a serious attempt were made to enforce that superstandard. The physicist works on individual problems, some of great practical significance, others of less. Unifications of fields which were formerly divided and far apart may alternate with this type of work. However, such fortunate occurrences are rare and happen only after each field has been thoroughly explored. Considering the fact that economics is much more difficult, much less understood, and undoubtedly in a much earlier stage of its evolution as a science than physics, one should clearly not expect more than a development of the above type in economics either.
Second we have to notice that the differences in scientific questions make it necessary to employ varying methods which may afterwards have to be discarded if better ones offer themselves. This has a double implication: In some branches of economics the most fruitful work may be that of careful, patient description; indeed this may be by far the largest domain for the present and for some time to come. In others it may be possible to develop already a theory in a strict manner, and for that purpose the use of mathematics may be required.
Mathematics has actually been used in economic theory, perhaps even in an exaggerated manner. In any case its use has not been highly successful. This is contrary to what one observes in other sciences: There mathematics has been applied with great success, and most sciences could hardly get along without it. Yet the explanation for this phenomenon is fairly simple.
1.2.2. It is not that there exists any fundamental reason why mathematics should not be used in economics. The arguments often heard that because of the human element, of the psychological factors etc., or because there is allegedly no measurement of important factors, mathematics will find no application, can all be dismissed as utterly mistaken. Almost all these objections have been made, or might have been made, many centuries ago in fields where mathematics is now the chief instrument of analysis. This "might have been" is meant in the following sense: Let us try to imagine ourselves in the period which preceded the mathematical or almost mathematical phase of the development in physics, that is the 16th century, or in chemistry and biology, that is the 18th century. Taking for granted the skeptical attitude of those who object to mathematical economics in principle, the outlook in the physical and biological sciences at these early periods can hardly have been better than that in economics, mutatis mutandis, at present.
As to the lack of measurement of the most important factors, the example of the theory of heat is most instructive; before the development of the mathematical theory the possibilities of quantitative measurements were less favorable there than they are now in economics. The precise measurements of the quantity and quality of heat (energy and temperature) were the outcome and not the antecedents of the mathematical theory. This ought to be contrasted with the fact that the quantitative and exact notions of prices, money and the rate of interest were already developed centuries ago.
A further group of objections against quantitative measurements in economics centers around the lack of indefinite divisibility of economic quantities. This is supposedly incompatible with the use of the infinitesimal calculus and hence (!) of mathematics. It is hard to see how such objections can be maintained in view of the atomic theories in physics and chemistry, the theory of quanta in electrodynamics, etc., and the notorious and continued success of mathematical analysis within these disciplines.
At this point it is appropriate to mention another familiar argument of
economic literature which may be revived as an objection against the
mathematical procedure.
1.2.3. In order to elucidate the conceptions which we are applying to
economics, we have given and may give again some illustrations from
physics. There are many social scientists who object to the drawing of
such parallels on various grounds, among which is generally found the
assertion that economic theory cannot be modeled after physics since it is a
science of social, of human phenomena, has to take psychology into account, etc. Such statements are at least premature. It is without doubt reasonable to discover what has led to progress in other sciences, and to investigate whether the application of the same principles may not lead to progress in economics also. Should the need for the application of different principles arise, it could be revealed only in the course of the actual development of economic theory. This would itself constitute a major revolution. But since most assuredly we have not yet reached such a state, and it is by no means certain that there ever will be need for entirely different scientific principles, it would be very unwise to consider anything else than the pursuit of our problems in the manner which has resulted in the establishment of physical science.
1.2.4. The reason why mathematics has not been more successful in economics must, consequently, be found elsewhere. The lack of real success is largely due to a combination of unfavorable circumstances, some of which can be removed gradually. To begin with, the economic problems were not formulated clearly and are often stated in such vague terms as to make mathematical treatment a priori appear hopeless because it is quite uncertain what the problems really are. There is no point in using exact methods where there is no clarity in the concepts and issues to which they are to be applied. Consequently the initial task is to clarify the knowledge of the matter by further careful descriptive work. But even in those parts of economics where the descriptive problem has been handled more satisfactorily, mathematical tools have seldom been used appropriately. They were either inadequately handled, as in the attempts to determine a general economic equilibrium by the mere counting of numbers of equations and unknowns, or they led to mere translations from a literary form of expression into symbols, without any subsequent mathematical analysis.
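The inadequacy of mere equation-counting can be seen in a small sketch; the linear systems below are invented for illustration and are not taken from any economic model. Equal numbers of equations and unknowns guarantee neither the existence nor the uniqueness of a solution:

```python
def solve_2x2(a, b, c, d, e, f):
    """Solve a*x + b*y = e and c*x + d*y = f by Cramer's rule.

    Returns (x, y), or None when the determinant vanishes, i.e. when the
    two equations are inconsistent or fail to determine x and y."""
    det = a * d - b * c
    if det == 0:
        return None
    return ((e * d - b * f) / det, (a * f - e * c) / det)

# Two equations, two unknowns, yet no solution at all:
#   x + y = 1  and  x + y = 2
print(solve_2x2(1, 1, 1, 1, 1, 2))
# Two equations, two unknowns, yet infinitely many solutions:
#   x + y = 1  and  2x + 2y = 2
print(solve_2x2(1, 1, 2, 2, 1, 2))
# Counting alone cannot distinguish these from a well-posed system:
#   x + y = 1  and  x - y = 0
print(solve_2x2(1, 1, 1, -1, 1, 0))
```

The point is the authors': matching the number of conditions to the number of unknowns is no substitute for proving that an equilibrium exists and is unique.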
Next, the empirical background of economic science is definitely inadequate. Our knowledge of the relevant facts of economics is incomparably smaller than that commanded in physics at the time when the mathematization of that subject was achieved. Indeed, the decisive break which came in physics in the seventeenth century, specifically in the field of mechanics, was possible only because of previous developments in astronomy. It was backed by several millennia of systematic, scientific, astronomical observation, culminating in an observer of unparalleled caliber, Tycho de Brahe. Nothing of this sort has occurred in economic science. It would have been absurd in physics to expect Kepler and Newton without Tycho, and there is no reason to hope for an easier development in economics.
These obvious comments should not be construed, of course, as a disparagement of statistical-economic research which holds the real promise of progress in the proper direction.
It is due to the combination of the above mentioned circumstances that mathematical economics has not achieved very much. The underlying vagueness and ignorance has not been dispelled by the inadequate and inappropriate use of a powerful instrument that is very difficult to handle.
In the light of these remarks we may describe our own position as follows: The aim of this book lies not in the direction of empirical research. The advancement of that side of economic science, on anything like the scale which was recognized above as necessary, is clearly a task of vast proportions. It may be hoped that as a result of the improvements of scientific technique and of experience gained in other fields, the development of descriptive economics will not take as much time as the comparison with astronomy would suggest. But in any case the task seems to transcend the limits of any individually planned program.
We shall attempt to utilize only some commonplace experience concerning human behavior which lends itself to mathematical treatment and which is of economic importance.
We believe that the possibility of a mathematical treatment of these phenomena refutes the "fundamental" objections referred to in 1.2.2. It will be seen, however, that this process of mathematization is not at all obvious. Indeed, the objections mentioned above may have their roots partly in the rather obvious difficulties of any direct mathematical approach. We shall find it necessary to draw upon techniques of mathematics which have not been used heretofore in mathematical economics, and it is quite possible that further study may result in the future in the creation of new mathematical disciplines.
To conclude, we may also observe that part of the feeling of dissatisfaction with the mathematical treatment of economic theory derives largely from the fact that frequently one is offered not proofs but mere assertions which are really no better than the same assertions given in literary form. Very frequently the proofs are lacking because a mathematical treatment has been attempted of fields which are so vast and so complicated that for a long time to come, until much more empirical knowledge is acquired, there is hardly any reason at all to expect progress more mathematico. The fact that these fields have been attacked in this way (as for example the theory of economic fluctuations, the time structure of production, etc.) indicates how much the attendant difficulties are being underestimated. They are enormous and we are now in no way equipped for them.
1.2.5. We have referred to the nature and the possibilities of those changes in mathematical technique (in fact, in mathematics itself) which a successful application of mathematics to a new subject may produce. It is important to visualize these in their proper perspective.
It must not be forgotten that these changes may be very considerable. The decisive phase of the application of mathematics to physics, Newton's creation of a rational discipline of mechanics, brought about, and can hardly be separated from, the discovery of the infinitesimal calculus. (There are several other examples, but none stronger than this.)
The importance of the social phenomena, the wealth and multiplicity of their manifestations, and the complexity of their structure, are at least equal to those in physics. It is therefore to be expected or feared that mathematical discoveries of a stature comparable to that of calculus will be needed in order to produce decisive success in this field. (Incidentally, it is in this spirit that our present efforts must be discounted.) A fortiori it is unlikely that a mere repetition of the tricks which served us so well in physics will do for the social phenomena too. The probability is very slim indeed, since it will be shown that we encounter in our discussions some mathematical problems which are quite different from those which occur in physical science.
These observations should be remembered in connection with the current overemphasis on the use of calculus, differential equations, etc., as the main tools of mathematical economics.
1.3. Necessary Limitations of the Objectives
1.3.1. We have to return, therefore, to the position indicated earlier: It is necessary to begin with those problems which are described clearly, even if they should not be as important from any other point of view. It should be added, moreover, that a treatment of these manageable problems may lead to results which are already fairly well known, but the exact proofs may nevertheless be lacking. Before they have been given the respective theory simply does not exist as a scientific theory. The movements of the planets were known long before their courses had been calculated and explained by Newton's theory, and the same applies in many smaller and less dramatic instances. And similarly in economic theory, certain results, say the indeterminateness of bilateral monopoly, may be known already. Yet it is of interest to derive them again from an exact theory. The same could and should be said concerning practically all established economic theorems.
1.3.2. It might be added finally that we do not propose to raise the question of the practical significance of the problems treated. This falls in line with what was said above about the selection of fields for theory. The situation is not different here from that in other sciences. There too the most important questions from a practical point of view may have been completely out of reach during long and fruitful periods of their development. This is certainly still the case in economics, where it is of utmost importance to know how to stabilize employment, how to increase the national income, or how to distribute it adequately. Nobody can really answer these questions, and we need not concern ourselves with the pretension that there can be scientific answers at present.
The great progress in every science came when, in the study of problems which were modest as compared with ultimate aims, methods were developed which could be extended further and further. The free fall is a very trivial physical phenomenon, but it was the study of this exceedingly simple fact and its comparison with the astronomical material, which brought forth mechanics.
It seems to us that the same standard of modesty should be applied in economics. It is futile to try to explain, and "systematically" at that, everything economic. The sound procedure is to obtain first utmost precision and mastery in a limited field, and then to proceed to another, somewhat wider one, and so on. This would also do away with the unhealthy practice of applying so-called theories to economic or social reform where they are in no way useful.
We believe that it is necessary to know as much as possible about the behavior of the individual and about the simplest forms of exchange. This standpoint was actually adopted with remarkable success by the founders of the marginal utility school, but nevertheless it is not generally accepted. Economists frequently point to much larger, more "burning" questions, and brush everything aside which prevents them from making statements about these. The experience of more advanced sciences, for example physics, indicates that this impatience merely delays progress, including that of the treatment of the "burning" questions. There is no reason to assume the existence of shortcuts.
1.4. Concluding Remarks
1.4. It is essential to realize that economists can expect no easier fate than that which befell scientists in other disciplines. It seems reasonable to expect that they will have to take up first problems contained in the very simplest facts of economic life and try to establish theories which explain them and which really conform to rigorous scientific standards. We can have enough confidence that from then on the science of economics will grow further, gradually comprising matters of more vital importance than those with which one has to begin. 1
The field covered in this book is very limited, and we approach it in this sense of modesty. We do not worry at all if the results of our study conform with views gained recently or held for a long time, for what is important is the gradual development of a theory, based on a careful analysis of the ordinary everyday interpretation of economic facts. This preliminary stage is necessarily heuristic, i.e. the phase of transition from unmathematical plausibility considerations to the formal procedure of mathematics. The theory finally obtained must be mathematically rigorous and conceptually general. Its first applications are necessarily to elementary problems where the result has never been in doubt and no theory is actually required. At this early stage the application serves to corroborate the theory. The next stage develops when the theory is applied
1 The beginning is actually of a certain significance, because the forms of exchange between a few individuals are the same as those observed on some of the most important markets of modern industry, or in the case of barter exchange between states in international trade.
to somewhat more complicated situations in which it may already lead to a
certain extent beyond the obvious and the familiar. Here theory and
application corroborate each other mutually. Beyond this lies the field of
real success: genuine prediction by theory. It is well known that all
mathematized sciences have gone through these successive phases of
evolution.
2. Qualitative Discussion of the Problem of Rational Behavior
2.1. The Problem of Rational Behavior
2.1.1. The subject matter of economic theory is the very complicated mechanism of prices and production, and of the gaining and spending of incomes. In the course of the development of economics it has been found, and it is now well-nigh universally agreed, that an approach to this vast problem is gained by the analysis of the behavior of the individuals which constitute the economic community. This analysis has been pushed fairly far in many respects, and while there still exists much disagreement the significance of the approach cannot be doubted, no matter how great its difficulties may be. The obstacles are indeed considerable, even if the investigation should at first be limited to conditions of economic statics, as they well must be. One of the chief difficulties lies in properly describing the assumptions which have to be made about the motives of the individual. This problem has been stated traditionally by assuming that the consumer desires to obtain a maximum of utility or satisfaction and the entrepreneur a maximum of profits.
The conceptual and practical difficulties of the notion of utility, and particularly of the attempts to describe it as a number, are well known and their treatment is not among the primary objectives of this work. We shall nevertheless be forced to discuss them in some instances, in particular in 3.3. and 3.5. Let it be said at once that the standpoint of the present book on this very important and very interesting question will be mainly opportunistic. We wish to concentrate on one problem, which is not that of the measurement of utilities and of preferences, and we shall therefore attempt to simplify all other characteristics as far as reasonably possible. We shall therefore assume that the aim of all participants in the economic system, consumers as well as entrepreneurs, is money, or equivalently a single monetary commodity. This is supposed to be unrestrictedly divisible and substitutable, freely transferable and identical, even in the quantitative sense, with whatever "satisfaction" or "utility" is desired by each participant. (For the quantitative character of utility, cf. 3.3. quoted above.)
It is sometimes claimed in economic literature that discussions of the notions of utility and preference are altogether unnecessary, since these are purely verbal definitions with no empirically observable consequences, i.e., entirely tautological. It does not seem to us that these notions are qualitatively inferior to certain well established and indispensable notions in physics, like force, mass, charge, etc. That is, while they are in their immediate form merely definitions, they become subject to empirical control through the theories which are built upon them and in no other way. Thus the notion of utility is raised above the status of a tautology by such economic theories as make use of it and the results of which can be compared with experience or at least with common sense.
2.1.2. The individual who attempts to obtain these respective maxima is also said to act "rationally." But it may safely be stated that there exists, at present, no satisfactory treatment of the question of rational behavior. There may, for example, exist several ways by which to reach the optimum position; they may depend upon the knowledge and understanding which the individual has and upon the paths of action open to him. A study of all these questions in qualitative terms will not exhaust them, because they imply, as must be evident, quantitative relationships. It would, therefore, be necessary to formulate them in quantitative terms so that all the elements of the qualitative description are taken into consideration. This is an exceedingly difficult task, and we can safely say that it has not been accomplished in the extensive literature about the topic. The chief reason for this lies, no doubt, in the failure to develop and apply suitable mathematical methods to the problem; this would have revealed that the maximum problem which is supposed to correspond to the notion of rationality is not at all formulated in an unambiguous way. Indeed, a more exhaustive analysis (to be given in 4.3.-4.5.) reveals that the significant relationships are much more complicated than the popular and the "philosophical" use of the word "rational" indicates.
A valuable qualitative preliminary description of the behavior of the individual is offered by the Austrian School, particularly in analyzing the economy of the isolated "Robinson Crusoe." We may have occasion to note also some considerations of Böhm-Bawerk concerning the exchange between two or more persons. The more recent exposition of the theory of the individual's choices in the form of indifference curve analysis builds up on the very same facts or alleged facts but uses a method which is often held to be superior in many ways. Concerning this we refer to the discussions in 2.1.1. and 3.3.
We hope, however, to obtain a real understanding of the problem of exchange by studying it from an altogether different angle; this is, from the perspective of a "game of strategy." Our approach will become clear presently, especially after some ideas which have been advanced, say by Böhm-Bawerk (whose views may be considered only as a prototype of this theory), are given correct quantitative formulation.
2.2. "Robinson Crusoe" Economy and Social Exchange Economy
2.2.1. Let us look more closely at the type of economy which is represented by the "Robinson Crusoe" model, that is an economy of an isolated single person or otherwise organized under a single will. This economy is confronted with certain quantities of commodities and a number of wants which they may satisfy. The problem is to obtain a maximum satisfaction. This is, considering in particular our above assumption of the numerical character of utility, indeed an ordinary maximum problem, its difficulty depending apparently on the number of variables and on the nature of the function to be maximized; but this is more of a practical difficulty than a theoretical one. 1 If one abstracts from continuous production and from the fact that consumption too stretches over time (and often uses durable consumers' goods), one obtains the simplest possible model. It was thought possible to use it as the very basis for economic theory, but this attempt (notably a feature of the Austrian version) was often contested.
The chief objection against using this very simplified model of an isolated
individual for the theory of a social exchange economy is that it does not
represent an individual exposed to the manifold social influences. Hence,
it is said to analyze an individual who might behave quite differently if his
choices were made in a social world where he would be exposed to factors
of imitation, advertising, custom, and so on. These factors certainly make
a great difference, but it is to be questioned whether they change the formal
properties of the process of maximizing. Indeed the latter has never been
implied, and since we are concerned with this problem alone, we can leave
the above social considerations out of account.
Some other differences between "Crusoe" and a participant in a social exchange economy will not concern us either. Such is the nonexistence of money as a means of exchange in the first case, where there is only a standard of calculation, for which purpose any commodity can serve. This difficulty indeed has been ploughed under by our assuming in 2.1.2. a quantitative and even monetary notion of utility. We emphasize again: Our interest lies in the fact that even after all these drastic simplifications Crusoe is confronted with a formal problem quite different from the one a participant in a social economy faces.
2.2.2. Crusoe is given certain physical data (wants and commodities) and his task is to combine and apply them in such a fashion as to obtain a maximum resulting satisfaction. There can be no doubt that he controls exclusively all the variables upon which this result depends, say the allotting of resources, the determination of the uses of the same commodity for different wants, etc. 2
Thus Crusoe faces an ordinary maximum problem, the difficulties of which are of a purely technical and not conceptual nature, as pointed out.
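Crusoe's situation can be sketched as a plain maximization over allocations that a single will controls completely. All quantities and utility numbers below are hypothetical, chosen only to illustrate the form of the problem:

```python
from itertools import product

# Hypothetical: 4 units of one commodity are allotted among three wants;
# each list gives the (made-up, diminishing-returns) utility of 0..4 units.
utility = {
    "food":    [0, 5, 8, 10, 11],
    "shelter": [0, 4, 7, 9, 10],
    "tools":   [0, 3, 5, 6, 7],
}
TOTAL = 4

best = None
for a, b in product(range(TOTAL + 1), repeat=2):
    c = TOTAL - a - b
    if c < 0:
        continue  # infeasible allotment
    value = utility["food"][a] + utility["shelter"][b] + utility["tools"][c]
    if best is None or value > best[0]:
        best = (value, (a, b, c))

print(best)
```

Every variable here is under Crusoe's exclusive control, so the search is an ordinary maximum problem; its difficulty grows only with the number of variables, exactly the technical, not conceptual, difficulty noted above.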
1 It is not important for the following to determine whether its theory is complete in all its aspects.
2 Sometimes uncontrollable factors also intervene, e.g. the weather in agriculture. These however are purely statistical phenomena. Consequently they can be eliminated by the known procedures of the calculus of probabilities: i.e., by determining the probabilities of the various alternatives and by introduction of the notion of "mathematical expectation." Cf. however the influence on the notion of utility, discussed in 3.3.
2.2.3. Consider now a participant in a social exchange economy. His problem has, of course, many elements in common with a maximum problem. But it also contains some, very essential, elements of an entirely different nature. He too tries to obtain an optimum result. But in order to achieve this, he must enter into relations of exchange with others. If two or more persons exchange goods with each other, then the result for each one will depend in general not merely upon his own actions but on those of the others as well. Thus each participant attempts to maximize a function (his above-mentioned "result") of which he does not control all variables. This is certainly no maximum problem, but a peculiar and disconcerting mixture of several conflicting maximum problems. Every participant is guided by another principle and neither determines all variables which affect his interest.
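This mixture of maximum problems can be made concrete in a toy sketch; the payoff tables are invented for illustration. Each of two participants controls one variable only, yet each one's "result" depends on both:

```python
# Hypothetical results: result_1[i][j] and result_2[i][j] are what
# participants 1 and 2 obtain when 1 chooses row i and 2 chooses column j.
result_1 = [[4, 1],
            [3, 2]]
result_2 = [[1, 3],
            [4, 2]]

# Participant 1 controls only i; his best i depends on the alien variable j.
best_i_given_j = [max(range(2), key=lambda i: result_1[i][j]) for j in range(2)]
# Participant 2 controls only j; his best j likewise depends on i.
best_j_given_i = [max(range(2), key=lambda j: result_2[i][j]) for i in range(2)]

print(best_i_given_j, best_j_given_i)
```

Neither list is a solution by itself: each participant's optimum shifts as the other's choice shifts, which is precisely why no single maximization describes the situation.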
This kind of problem is nowhere dealt with in classical mathematics. We emphasize, at the risk of being pedantic, that this is no conditional maximum problem, no problem of the calculus of variations, of functional analysis, etc. It arises in full clarity, even in the most "elementary" situations, e.g., when all variables can assume only a finite number of values.
A particularly striking expression of the popular misunderstanding about this pseudo-maximum problem is the famous statement according to which the purpose of social effort is the "greatest possible good for the greatest possible number." A guiding principle cannot be formulated by the requirement of maximizing two (or more) functions at once. Such a principle, taken literally, is self-contradictory. (In general one function will have no maximum where the other function has one.) It is no better than saying, e.g., that a firm should obtain maximum prices at maximum turnover, or a maximum revenue at minimum outlay. If some order of importance of these principles or some weighted average is meant, this should be stated. However, in the situation of the participants in a social economy nothing of that sort is intended, but all maxima are desired at once by various participants.
One would be mistaken to believe that it can be obviated, like the difficulty in the Crusoe case mentioned in footnote 2 on p. 10, by a mere recourse to the devices of the theory of probability. Every participant can determine the variables which describe his own actions but not those of the others. Nevertheless those "alien" variables cannot, from his point of view, be described by statistical assumptions. This is because the others are guided, just as he himself, by rational principles (whatever that may mean) and no modus procedendi can be correct which does not attempt to understand those principles and the interactions of the conflicting interests of all participants.
Sometimes some of these interests run more or less parallel; then we are nearer to a simple maximum problem. But they can just as well be opposed. The general theory must cover all these possibilities, all intermediary stages, and all their combinations.
2.2.4. The difference between Crusoe's perspective and that of a participant in a social economy can also be illustrated in this way: Apart from those variables which his will controls, Crusoe is given a number of data which are "dead"; they are the unalterable physical background of the situation. (Even when they are apparently variable, cf. footnote 2 on p. 10, they are really governed by fixed statistical laws.) Not a single datum with which he has to deal reflects another person's will or intention of an economic kind based on motives of the same nature as his own. A participant in a social exchange economy, on the other hand, faces data of this last type as well: they are the product of other participants' actions and volitions (like prices). His actions will be influenced by his expectation of these, and they in turn reflect the other participants' expectation of his actions.
Thus the study of the Crusoe economy and the use of the methods applicable to it, is of much more limited value to economic theory than has been assumed heretofore even by the most radical critics. The grounds for this limitation lie not in the field of those social relationships which we have mentioned before (although we do not question their significance) but rather they arise from the conceptual differences between the original (Crusoe's) maximum problem and the more complex problem sketched above.
We hope that the reader will be convinced by the above that we face here and now a really conceptual and not merely technical difficulty. And it is this problem which the theory of "games of strategy" is mainly devised to meet.
2.3. The Number of Variables and the Number of Participants
2.3.1. The formal set-up which we used in the preceding paragraphs to indicate the events in a social exchange economy made use of a number of "variables" which described the actions of the participants in this economy. Thus every participant is allotted a set of variables, "his" variables, which together completely describe his actions, i.e. express precisely the manifestations of his will. We call these sets the partial sets of variables. The partial sets of all participants constitute together the set of all variables, to be called the total set. So the total number of variables is determined first by the number of participants, i.e. of partial sets, and second by the number of variables in every partial set.
From a purely mathematical point of view there would be nothing objectionable in treating all the variables of any one partial set as a single variable, "the" variable of the participant corresponding to this partial set. Indeed, this is a procedure which we are going to use frequently in our mathematical discussions; it makes absolutely no difference conceptually, and it simplifies notations considerably.
For the moment, however, we propose to distinguish from each other the variables within each partial set. The economic models to which one is naturally led suggest that procedure; thus it is desirable to describe for every participant the quantity of every particular good he wishes to acquire by a separate variable, etc.
2.3.2. Now we must emphasize that any increase of the number of variables inside a participant's partial set may complicate our problem technically, but only technically. Thus in a Crusoe economy, where there exists only one participant and only one partial set (which then coincides with the total set), this may make the necessary determination of a maximum technically more difficult, but it will not alter the "pure maximum" character of the problem. If, on the other hand, the number of participants, i.e., of the partial sets of variables, is increased, something of a very different nature happens. To use a terminology which will turn out to be significant, that of games, this amounts to an increase in the number of players in the game. However, to take the simplest cases, a three-person game is very fundamentally different from a two-person game, a four-person game from a three-person game, etc. The combinatorial complications of the problem (which is, as we saw, no maximum problem at all) increase tremendously with every increase in the number of players, as our subsequent discussions will amply show.
We have gone into this matter in such detail particularly because in
most models of economics a peculiar mixture of these two phenomena occurs.
Whenever the number of players, i.e. of participants in a social economy,
increases, the complexity of the economic system usually increases too;
e.g. the number of commodities and services exchanged, processes of
production used, etc. Thus the number of variables in every participant's
partial set is likely to increase. But the number of participants, i.e. of
partial sets, has increased too. Thus both of the sources which we discussed
contribute pari passu to the total increase in the number of variables. It is
essential to visualize each source in its proper role.
2.4. The Case of Many Participants: Free Competition
2.4.1. In elaborating the contrast between a Crusoe economy and a social exchange economy in 2.2.2.-2.2.4., we emphasized those features of the latter which become more prominent when the number of participants (while greater than 1) is of moderate size. The fact that every participant is influenced by the anticipated reactions of the others to his own measures, and that this is true for each of the participants, is most strikingly the crux of the matter (as far as the sellers are concerned) in the classical problems of duopoly, oligopoly, etc. When the number of participants becomes really great, some hope emerges that the influence of every particular participant will become negligible, and that the above difficulties may recede and a more conventional theory become possible. These are, of course, the classical conditions of "free competition." Indeed, this was the starting point of much of what is best in economic theory. Compared with this case of great numbers (free competition) the cases of small numbers on the side of the sellers (monopoly, duopoly, oligopoly) were even considered to be exceptions and abnormities. (Even in these cases the number of participants is still very large in view of the competition
among the buyers. The cases involving really small numbers are those of
bilateral monopoly, of exchange between a monopoly and an oligopoly, or
two oligopolies, etc.)
2.4.2. In all fairness to the traditional point of view this much ought to be said: It is a well known phenomenon in many branches of the exact and physical sciences that very great numbers are often easier to handle than those of medium size. An almost exact theory of a gas, containing about 10²⁵ freely moving particles, is incomparably easier than that of the solar system, made up of 9 major bodies; and still more than that of a multiple star of three or four objects of about the same size. This is, of course, due to the excellent possibility of applying the laws of statistics and probabilities in the first case.
This analogy, however, is far from perfect for our problem. The theory of mechanics for 2, 3, 4, . . . bodies is well known, and in its general theoretical (as distinguished from its special and computational) form is the foundation of the statistical theory for great numbers. For the social exchange economy, i.e. for the equivalent "games of strategy," the theory of 2, 3, 4, . . . participants was heretofore lacking. It is this need that our previous discussions were designed to establish and that our subsequent investigations will endeavor to satisfy. In other words, only after the theory for moderate numbers of participants has been satisfactorily developed will it be possible to decide whether extremely great numbers of participants simplify the situation. Let us say it again: We share the hope (chiefly because of the above-mentioned analogy in other fields!) that such simplifications will indeed occur. The current assertions concerning free competition appear to be very valuable surmises and inspiring anticipations of results. But they are not results, and it is scientifically unsound to treat them as such as long as the conditions which we mentioned above are not satisfied.
There exists in the literature a considerable amount of theoretical discussion purporting to show that the zones of indeterminateness (of rates of exchange), which undoubtedly exist when the number of participants is small, narrow and disappear as the number increases. This then would provide a continuous transition into the ideal case of free competition, for a very great number of participants, where all solutions would be sharply and uniquely determined. While it is to be hoped that this indeed turns out to be the case in sufficient generality, one cannot concede that anything like this contention has been established conclusively thus far. There is no getting away from it: The problem must be formulated, solved and understood for small numbers of participants before anything can be proved about the changes of its character in any limiting case of large numbers, such as free competition.
2.4.3. A really fundamental reopening of this subject is the more desirable because it is neither certain nor probable that a mere increase in the number of participants will always lead in fine to the conditions of free competition. The classical definitions of free competition all involve further postulates besides the greatness of that number. E.g., it is clear that if certain great groups of participants will, for any reason whatsoever, act together, then the great number of participants may not become effective; the decisive exchanges may take place directly between large "coalitions," 1 few in number, and not between individuals, many in number, acting independently. Our subsequent discussion of "games of strategy" will show that the role and size of "coalitions" is decisive throughout the entire subject. Consequently the above difficulty (though not new) still remains the crucial problem. Any satisfactory theory of the "limiting transition" from small numbers of participants to large numbers will have to explain under what circumstances such big coalitions will or will not be formed, i.e. when the large numbers of participants will become effective and lead to a more or less free competition. Which of these alternatives is likely to arise will depend on the physical data of the situation. Answering this question is, we think, the real challenge to any theory of free competition.
2.5. The "Lausanne" Theory
2.5. This section should not be concluded without a reference to the equilibrium theory of the Lausanne School and also of various other systems which take into consideration "individual planning" and interlocking individual plans. All these systems pay attention to the interdependence of the participants in a social economy. This, however, is invariably done under far-reaching restrictions. Sometimes free competition is assumed, after the introduction of which the participants face fixed conditions and act like a number of Robinson Crusoes, solely bent on maximizing their individual satisfactions, which under these conditions are again independent. In other cases other restricting devices are used, all of which amount to excluding the free play of "coalitions" formed by any or all types of participants. There are frequently definite, but sometimes hidden, assumptions concerning the ways in which their partly parallel and partly opposite interests will influence the participants, and cause them to cooperate or not, as the case may be. We hope we have shown that such a procedure amounts to a petitio principii, at least on the plane on which we should like to put the discussion. It avoids the real difficulty and deals with a verbal problem, which is not the empirically given one. Of course we do not wish to question the significance of these investigations, but they do not answer our queries.
3. The Notion of Utility
3.1. Preferences and Utilities
3.1.1. We have stated already in 2.1.1. in what way we wish to describe the fundamental concept of individual preferences by the use of a rather far-reaching notion of utility. Many economists will feel that we are assuming far too much (cf. the enumeration of the properties we postulated in 2.1.1.), and that our standpoint is a retrogression from the more cautious modern technique of "indifference curves."
1 Such as trade unions, consumers' cooperatives, industrial cartels, and conceivably some organizations more in the political sphere.
Before attempting any specific discussion let us state as a general excuse that our procedure at worst is only the application of a classical preliminary device of scientific analysis: To divide the difficulties, i.e. to concentrate on one (the subject proper of the investigation in hand), and to reduce all others as far as reasonably possible, by simplifying and schematizing assumptions. We should also add that this high-handed treatment of preferences and utilities is employed in the main body of our discussion, but we shall incidentally investigate to a certain extent the changes which an avoidance of the assumptions in question would cause in our theory (cf. 66., 67.).
We feel, however, that one part of our assumptions at least (that of treating utilities as numerically measurable quantities) is not quite as radical as is often assumed in the literature. We shall attempt to prove this particular point in the paragraphs which follow. It is hoped that the reader will forgive us for discussing only incidentally, and in a condensed form, a subject of so great a conceptual importance as that of utility. It seems however that even a few remarks may be helpful, because the question of the measurability of utilities is similar in character to corresponding questions in the physical sciences.
3.1.2. Historically, utility was first conceived as quantitatively measurable, i.e. as a number. Valid objections can be and have been made against this view in its original, naive form. It is clear that every measurement (or rather every claim of measurability) must ultimately be based on some immediate sensation, which possibly cannot and certainly need not be analyzed any further. 1 In the case of utility the immediate sensation of preference (of one object or aggregate of objects as against another) provides this basis. But this permits us only to say when for one person one utility is greater than another. It is not in itself a basis for numerical comparison of utilities for one person, nor of any comparison between different persons. Since there is no intuitively significant way to add two utilities for the same person, the assumption that utilities are of non-numerical character even seems plausible. The modern method of indifference curve analysis is a mathematical procedure to describe this situation.
3.2. Principles of Measurement: Preliminaries
3.2.1. All this is strongly reminiscent of the conditions existent at the beginning of the theory of heat: that too was based on the intuitively clear concept of one body feeling warmer than another, yet there was no immediate way to express significantly by how much, or how many times, or in what sense.
1 Such as the sensations of light, heat, muscular effort, etc., in the corresponding
branches of physics.
This comparison with heat also shows how little one can forecast a priori what the ultimate shape of such a theory will be. The above crude indications do not disclose at all what, as we now know, subsequently happened. It turned out that heat permits quantitative description not by one number but by two: the quantity of heat and temperature. The former is rather directly numerical because it turned out to be additive and also in an unexpected way connected with mechanical energy which was numerical anyhow. The latter is also numerical, but in a much more subtle way; it is not additive in any immediate sense, but a rigid numerical scale for it emerged from the study of the concordant behavior of ideal gases, and the role of absolute temperature in connection with the entropy theorem.
3.2.2. The historical development of the theory of heat indicates that
one must be extremely careful in making negative assertions about any
concept with the claim to finality. Even if utilities look very unnumerical
today, the history of the experience in the theory of heat may repeat itself,
and nobody can foretell with what ramifications and variations. 1 And it
should certainly not discourage theoretical explanations of the formal
possibilities of a numerical utility.
3.3. Probability and Numerical Utilities
3.3.1. We can go even one step beyond the above double negations, which were only cautions against premature assertions of the impossibility of a numerical utility. It can be shown that, under the conditions on which the indifference curve analysis is based, very little extra effort is needed to reach a numerical utility.
It has been pointed out repeatedly that a numerical utility is dependent upon the possibility of comparing differences in utilities. This may seem (and indeed is) a more far-reaching assumption than that of a mere ability to state preferences. But it will seem that the alternatives to which economic preferences must be applied are such as to obliterate this distinction.
3.3.2. Let us for the moment accept the picture of an individual whose system of preferences is all-embracing and complete, i.e. who, for any two objects or rather for any two imagined events, possesses a clear intuition of preference.
More precisely we expect him, for any two alternative events which are
put before him as possibilities, to be able to tell which of the two he prefers.
It is a very natural extension of this picture to permit such an individual
to compare not only events, but even combinations of events with stated
probabilities. 2
By a combination of two events we mean this: Let the two events be denoted by B and C and use, for the sake of simplicity, the probability 50%-50%. Then the "combination" is the prospect of seeing B occur with a probability of 50% and (if B does not occur) C with the (remaining) probability of 50%. We stress that the two alternatives are mutually exclusive, so that no possibility of complementarity and the like exists. Also, that an absolute certainty of the occurrence of either B or C exists.
1 A good example of the wide variety of formal possibilities is given by the entirely different development of the theory of light, colors, and wave lengths. All these notions too became numerical, but in an entirely different way.
2 Indeed this is necessary if he is engaged in economic activities which are explicitly dependent on probability. Cf. the example of agriculture in footnote 2 on p. 10.
To restate our position. We expect the individual under consideration to possess a clear intuition whether he prefers the event A to the 50-50 combination of B or C, or conversely. It is clear that if he prefers A to B and also to C, then he will prefer it to the above combination as well; similarly, if he prefers B as well as C to A, then he will prefer the combination too. But if he should prefer A to, say, B, but at the same time C to A, then any assertion about his preference of A against the combination contains fundamentally new information. Specifically: If he now prefers A to the 50-50 combination of B and C, this provides a plausible base for the numerical estimate that his preference of A over B is in excess of his preference of C over A. 1, 2
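The inference just described can be checked mechanically once a numerical utility is assumed. The following sketch (our illustration; the utility numbers are hypothetical and not from the text) reads the 50-50 combination as an even-chance mixture and verifies that preferring A to the combination of B and C is the same statement as the preference of A over B exceeding that of C over A:

```python
# Sketch of the comparison in 3.3.2, with hypothetical numeric utilities.
# The text derives numbers from preferences; here we go the other way,
# to check that the inference is internally consistent.

def prefers_a_to_5050_mix(u_a: float, u_b: float, u_c: float) -> bool:
    """True if A is preferred to the 50-50 combination of B and C,
    reading the combination as an even-chance mixture of utilities."""
    return u_a > 0.5 * u_b + 0.5 * u_c

# Suppose the individual prefers C to A and A to B (u_c > u_a > u_b):
u_b, u_a, u_c = 1.0, 4.0, 5.0

# Preferring A to the mix is the same as saying that the preference of
# A over B exceeds the preference of C over A:
assert prefers_a_to_5050_mix(u_a, u_b, u_c) == ((u_a - u_b) > (u_c - u_a))
```

The equivalence is an algebraic identity: u(A) > ½u(B) + ½u(C) holds exactly when u(A) - u(B) > u(C) - u(A), whatever the numbers are.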
If this standpoint is accepted, then there is a criterion with which to compare the preference of C over A with the preference of A over B. It is well known that thereby utilities (or rather differences of utilities) become numerically measurable.
That the possibility of comparison between A, B, and C only to this extent is already sufficient for a numerical measurement of "distances" was first observed in economics by Pareto. Exactly the same argument has been made, however, by Euclid for the position of points on a line; in fact it is the very basis of his classical derivation of numerical distances.
The introduction of numerical measures can be achieved even more directly if use is made of all possible probabilities. Indeed: Consider three events, C, A, B, for which the order of the individual's preferences is the one stated. Let α be a real number between 0 and 1, such that A is exactly equally desirable with the combined event consisting of a chance of probability 1 - α for B and the remaining chance of probability α for C. Then we suggest the use of α as a numerical estimate for the ratio of the preference of A over B to that of C over B. 3 An exact and exhaustive
1 To give a simple example: Assume that an individual prefers the consumption of a glass of tea to that of a cup of coffee, and the cup of coffee to a glass of milk. If we now want to know whether the last preference, i.e., difference in utilities, exceeds the former, it suffices to place him in a situation where he must decide this: Does he prefer a cup of coffee to a glass the content of which will be determined by a 50%-50% chance device as tea or milk.
2 Observe that we have only postulated an individual intuition which permits decision as to which of two "events" is preferable. But we have not directly postulated any intuitive estimate of the relative sizes of two preferences, i.e., in the subsequent terminology, of two differences of utilities.
This is important, since the former information ought to be obtainable in a reproducible way by mere "questioning."
3 This offers a good opportunity for another illustrative example. The above technique permits a direct determination of the ratio q of the utility of possessing 1 unit of a certain good to the utility of possessing 2 units of the same good. The individual must
elaboration of these ideas requires the use of the axiomatic method. A simple treatment on this basis is indeed possible. We shall discuss it in 3.5.-3.7.
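If one again assumes hypothetical numeric utilities (our illustration, not the text's construction, which runs in the opposite direction), the number α defined above can be computed in closed form: indifference between A and the mixed event means u(A) = α·u(C) + (1 - α)·u(B), hence α = (u(A) - u(B)) / (u(C) - u(B)), which is exactly the asserted ratio of preferences.

```python
# Sketch of the calibration above, with hypothetical numeric utilities.

def calibrate_alpha(u_b: float, u_a: float, u_c: float) -> float:
    """Probability alpha at which A is indifferent to the mixed event
    "C with probability alpha, B with probability 1 - alpha".
    Requires the stated preference order u_c > u_a > u_b."""
    assert u_c > u_a > u_b
    # Solve u_a = alpha*u_c + (1 - alpha)*u_b for alpha:
    return (u_a - u_b) / (u_c - u_b)

# With hypothetical utilities u(B) = 1, u(A) = 4, u(C) = 5:
alpha = calibrate_alpha(1.0, 4.0, 5.0)
assert alpha == 0.75   # preference of A over B is 3/4 of that of C over B
```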
3.3.3. To avoid misunderstandings let us state that the "events" which were used above as the substratum of preferences are conceived as future events so as to make all logically possible alternatives equally admissible. However, it would be an unnecessary complication, as far as our present objectives are concerned, to get entangled with the problems of the preferences between events in different periods of the future. 1 It seems, however, that such difficulties can be obviated by locating all "events" in which we are interested at one and the same, standardized, moment, preferably in the immediate future.
The above considerations are so vitally dependent upon the numerical concept of probability that a few words concerning the latter may be appropriate.
Probability has often been visualized as a subjective concept more or less in the nature of an estimation. Since we propose to use it in constructing an individual, numerical estimation of utility, the above view of probability would not serve our purpose. The simplest procedure is, therefore, to insist upon the alternative, perfectly well founded interpretation of probability as frequency in long runs. This gives directly the necessary numerical foothold. 2
3.3.4. This procedure for a numerical measurement of the utilities of the individual depends, of course, upon the hypothesis of completeness in the system of individual preferences. 3 It is conceivable (and may even in a way be more realistic) to allow for cases where the individual is neither able to state which of two alternatives he prefers nor that they are equally desirable. In this case the treatment by indifference curves becomes impracticable too. 4
How real this possibility is, both for individuals and for organizations, seems to be an extremely interesting question, but it is a question of fact. It certainly deserves further study. We shall reconsider it briefly in 3.7.2.
At any rate we hope we have shown that the treatment by indifference curves implies either too much or too little: if the preferences of the individual are not all comparable, then the indifference curves do not exist. 1 If the individual's preferences are all comparable, then we can even obtain a (uniquely defined) numerical utility which renders the indifference curves superfluous.
be given the choice of obtaining 1 unit with certainty or of playing the chance to get two units with the probability α, or nothing with the probability 1 - α. If he prefers the former, then α < q; if he prefers the latter, then α > q; if he cannot state a preference either way, then α = q.
1 It is well known that this presents very interesting, but as yet extremely obscure, connections with the theory of saving and interest, etc.
2 If one objects to the frequency interpretation of probability then the two concepts (probability and preference) can be axiomatized together. This too leads to a satisfactory numerical concept of utility which will be discussed on another occasion.
3 We have not obtained any basis for a comparison, quantitatively or qualitatively, of the utilities of different individuals.
4 These problems belong systematically in the mathematical theory of ordered sets. The above question in particular amounts to asking whether events, with respect to preference, form a completely or a partially ordered set. Cf. 65.3.
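The footnote's determination of the ratio q amounts to an algorithm: put the question to the individual for varying α and bisect on the answers. A sketch of that procedure (our illustration; the simulated "individual" is driven by hypothetical utility numbers, whereas in the text only the yes/no answers would be observable):

```python
# Estimate q = u(1 unit) / u(2 units) purely from yes/no preference
# answers, by bisecting on the lottery probability alpha.

def prefers_certain_unit(alpha: float, u_one: float, u_two: float) -> bool:
    """Does the individual prefer 1 unit for sure over a lottery giving
    2 units with probability alpha (and nothing otherwise)?"""
    return u_one > alpha * u_two

def estimate_q(ask, tol: float = 1e-6) -> float:
    """Bisect on alpha: preferring the certain unit means alpha < q,
    preferring the lottery means alpha > q."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if ask(mid):
            lo = mid        # certain unit preferred: q lies above mid
        else:
            hi = mid        # lottery preferred: q lies below mid
    return (lo + hi) / 2

u_one, u_two = 3.0, 5.0     # hypothetical utilities, so q = 0.6
q = estimate_q(lambda a: prefers_certain_unit(a, u_one, u_two))
assert abs(q - 0.6) < 1e-4
```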
All this becomes, of course, pointless for the entrepreneur who can
calculate in terms of (monetary) costs and profits.
3.3.5. The objection could be raised that it is not necessary to go into all these intricate details concerning the measurability of utility, since evidently the common individual, whose behavior one wants to describe, does not measure his utilities exactly but rather conducts his economic activities in a sphere of considerable haziness. The same is true, of course, for much of his conduct regarding light, heat, muscular effort, etc. But in order to build a science of physics these phenomena had to be measured. And subsequently the individual has come to use the results of such measurements directly or indirectly even in his everyday life. The same may obtain in economics at a future date. Once a fuller understanding of economic behavior has been achieved with the aid of a theory which makes use of this instrument, the life of the individual might be materially affected. It is, therefore, not an unnecessary digression to study these problems.
3.4. Principles of Measurement: Detailed Discussion
3.4.1. The reader may feel, on the basis of the foregoing, that we obtained a numerical scale of utility only by begging the principle, i.e. by really postulating the existence of such a scale. We have argued in 3.3.2. that if an individual prefers A to the 50-50 combination of B and C (while preferring C to A and A to B), this provides a plausible basis for the numerical estimate that this preference of A over B exceeds that of C over A. Are we not postulating here or taking it for granted that one preference may exceed another, i.e. that such statements convey a meaning? Such a view would be a complete misunderstanding of our procedure.
3.4.2. We are not postulating or assuming anything of the kind. We have assumed only one thing (and for this there is good empirical evidence): namely, that imagined events can be combined with probabilities. And therefore the same must be assumed for the utilities attached to them, whatever they may be. Or to put it in more mathematical language:
There frequently appear in science quantities which are a priori not mathematical, but attached to certain aspects of the physical world. Occasionally these quantities can be grouped together in domains within which certain natural, physically defined operations are possible. Thus the physically defined quantity of "mass" permits the operation of addition. The physico-geometrically defined quantity of "distance" 2 permits the same operation. On the other hand, the physico-geometrically defined quantity of "position" does not permit this operation, 1 but it permits the operation of forming the "center of gravity" of two positions. 2 Again other physico-geometrical concepts, usually styled "vectorial" (like velocity and acceleration) permit the operation of "addition."
1 Points on the same indifference curve must be identified and are therefore no instances of incomparability.
2 Let us, for the sake of the argument, view geometry as a physical discipline, a sufficiently tenable viewpoint. By "geometry" we mean, equally for the sake of the argument, Euclidean geometry.
3.4.3. In all these cases where such a "natural" operation is given a name which is reminiscent of a mathematical operation (like the instances of "addition" above) one must carefully avoid misunderstandings. This nomenclature is not intended as a claim that the two operations with the same name are identical; this is manifestly not the case; it only expresses the opinion that they possess similar traits, and the hope that some correspondence between them will ultimately be established. This, of course (when feasible at all), is done by finding a mathematical model for the physical domain in question, within which those quantities are defined by numbers, so that in the model the mathematical operation describes the synonymous "natural" operation.
To return to our examples: "energy" and "mass" became numbers in the pertinent mathematical models, "natural" addition becoming ordinary addition. "Position" as well as the vectorial quantities became triplets 3 of numbers, called coordinates or components respectively. The "natural" concept of "center of gravity" of two positions {x1, x2, x3} and {x'1, x'2, x'3}, 4 with the "masses" α, 1 - α (cf. footnote 2 above), becomes
{αx1 + (1 - α)x'1, αx2 + (1 - α)x'2, αx3 + (1 - α)x'3}. 5
The "natural" operation of "addition" of vectors {x1, x2, x3} and {x'1, x'2, x'3} becomes {x1 + x'1, x2 + x'2, x3 + x'3}. 6
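The two "natural" operations on positions described above can be written out directly once positions are represented as coordinate triplets (a small sketch of ours, not part of the text):

```python
# Positions and vectors as coordinate triplets; the two "natural"
# operations of the text: center of gravity and componentwise addition.
from typing import Tuple

Triplet = Tuple[float, float, float]

def center_of_gravity(p: Triplet, q: Triplet, alpha: float) -> Triplet:
    """Center of gravity of positions p, q with masses alpha, 1 - alpha."""
    return tuple(alpha * a + (1 - alpha) * b for a, b in zip(p, q))

def vector_sum(v: Triplet, w: Triplet) -> Triplet:
    """Componentwise "natural" addition of two vectors."""
    return tuple(a + b for a, b in zip(v, w))

assert center_of_gravity((0.0, 0.0, 0.0), (2.0, 4.0, 6.0), 0.5) == (1.0, 2.0, 3.0)
assert vector_sum((1.0, 2.0, 3.0), (4.0, 5.0, 6.0)) == (5.0, 7.0, 9.0)
```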
What was said above about "natural" and mathematical operations applies equally to natural and mathematical relations. The various concepts of "greater" which occur in physics (greater energy, force, heat, velocity, etc.) are good examples.
These "natural" relations are the best base upon which to construct mathematical models and to correlate the physical domain with them. 7, 8
1 We are thinking of a "homogeneous" Euclidean space, in which no origin or frame of reference is preferred above any other.
2 With respect to two given masses α, 1 - α occupying those positions. It may be convenient to normalize so that the total mass is the unit, i.e. α + (1 - α) = 1.
3 We are thinking of three-dimensional Euclidean space.
4 We are now describing them by their three numerical coordinates.
5 This is usually denoted by α{x1, x2, x3} + (1 - α){x'1, x'2, x'3}. Cf. (16:A:c) in 16.2.1.
6 This is usually denoted by {x1, x2, x3} + {x'1, x'2, x'3}. Cf. the beginning of 16.2.1.
7 Not the only one. Temperature is a good counterexample. The "natural" relation of "greater" would not have sufficed to establish the present day mathematical model, i.e. the absolute temperature scale. The devices actually used were different. Cf. 3.2.1.
8 We do not want to give the misleading impression of attempting here a complete
picture of the formation of mathematical models, i.e. of physical theories. It should be
remembered that this is a very varied process with many unexpected phases. An impor
tant one is, e.g., the disentanglement of concepts: i.e. splitting up something which at
22 FORMULATION OF THE ECONOMIC PROBLEM
3.4.4. Here a further remark must be made. Assume that a satisfactory mathematical model for a physical domain in the above sense has been found, and that the physical quantities under consideration have been correlated with numbers. In this case it is not necessarily true that the description (of the mathematical model) provides for a unique way of correlating the physical quantities to numbers; i.e., it may specify an entire family of such correlations (the mathematical name is: mappings), any one of which can be used for the purposes of the theory. Passage from one of these correlations to another amounts to a transformation of the numerical data describing the physical quantities. We then say that in this theory the physical quantities in question are described by numbers up to that system of transformations. The mathematical name of such transformation systems is: groups. 1
Examples of such situations are numerous. Thus the geometrical concept of distance is a number, up to multiplication by (positive) constant factors. 2 The situation concerning the physical quantity of mass is the same. The physical concept of energy is a number up to any linear transformation, i.e. addition of any constant and multiplication by any (positive) constant. 3 The concept of position is defined up to an inhomogeneous orthogonal linear transformation. 4, 5 The vectorial concepts are defined up to homogeneous transformations of the same kind. 5, 6
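These transformation systems can be illustrated numerically. A hedged modern sketch, not from the text (Python; the quantities and constants are invented for illustration): ratios of distances survive any change of unit, and ratios of energy differences survive any admissible linear transformation.

```python
# Distance is a number up to multiplication by a (positive) constant:
# the ratio of two distances does not depend on the unit chosen.
d1, d2 = 2.0, 5.0            # two distances (illustrative values)
k = 2.54                     # an arbitrary change of unit
assert abs((k * d1) / (k * d2) - d1 / d2) < 1e-12

# Energy is a number up to a linear transformation E -> c*E + b, c > 0:
# ratios of energy differences are what all such descriptions agree on.
e1, e2, e3 = 1.0, 4.0, 9.0   # three energies (illustrative values)
c, b = 3.0, 7.0              # an arbitrary admissible transformation
t = lambda e: c * e + b
assert abs((t(e2) - t(e1)) / (t(e3) - t(e1))
           - (e2 - e1) / (e3 - e1)) < 1e-12
```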
3.4.5. It is even conceivable that a physical quantity is a number up to any monotone transformation. This is the case for quantities for which only a "natural" relation "greater" exists, and nothing else. E.g. this was the case for temperature as long as only the concept of "warmer" was known; 7 it applies to the Mohs' scale of hardness of minerals; it applies to
superficial inspection seems to be one physical entity into several mathematical notions.
Thus the "disentanglement" of force and energy, of quantity of heat and temperature,
were decisive in their respective fields.
It is quite unforeseeable how many such differentiations still lie ahead in economic
theory.
1 We shall encounter groups in another context in 28.1.1, where references to the literature are also found.
2 I.e. there is nothing in Euclidean geometry to fix a unit of distance.
3 I.e. there is nothing in mechanics to fix a zero or a unit of energy. Cf. with footnote 2 above. Distance has a natural zero, the distance of any point from itself.
4 I.e. {x₁, x₂, x₃} are to be replaced by {x₁*, x₂*, x₃*}, where

x₁* = a₁₁x₁ + a₁₂x₂ + a₁₃x₃ + b₁,
x₂* = a₂₁x₁ + a₂₂x₂ + a₂₃x₃ + b₂,
x₃* = a₃₁x₁ + a₃₂x₂ + a₃₃x₃ + b₃,

the aᵢⱼ, bᵢ being constants, and the matrix (aᵢⱼ) what is known as orthogonal.
5 I.e. there is nothing in geometry to fix either origin or the frame of reference when positions are concerned; and nothing to fix the frame of reference when vectors are concerned.
6 I.e. without the bᵢ in footnote 4 above. Sometimes a wider concept of matrices is permissible, all those with determinants ≠ 0. We need not discuss these matters here.
7 But no quantitatively reproducible method of thermometry.
THE NOTION OF UTILITY 23
the notion of utility when this is based on the conventional idea of preference. In these cases one may be tempted to take the view that the quantity in question is not numerical at all, considering how arbitrary the description by numbers is. It seems to be preferable, however, to refrain from such qualitative statements and to state instead objectively up to what system of transformations the numerical description is determined. The case when the system consists of all monotone transformations is, of course, a rather extreme one; various graduations at the other end of the scale are the transformation systems mentioned above: inhomogeneous or homogeneous orthogonal linear transformations in space, linear transformations of one numerical variable, multiplication of that variable by a constant. 1 In fine, the case even occurs where the numerical description is absolutely rigorous, i.e. where no transformations at all need be tolerated. 2
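A quantity determined only up to monotone transformations carries purely ordinal information: any strictly increasing rescaling is an equally good description, and only the ordering survives. As a modern editorial illustration (invented values, not from the text):

```python
# A Mohs-like hardness scale: the numbers are conventional, and any
# strictly increasing transformation describes the same facts.
hardness = {"talc": 1, "quartz": 7, "diamond": 10}

def rescale(h, f):
    """Apply a transformation f to every scale value."""
    return {mineral: f(v) for mineral, v in h.items()}

f = lambda v: v ** 3 + 5            # a strictly increasing transformation
g = rescale(hardness, f)

# The ordering is all that the two descriptions have in common.
order_before = sorted(hardness, key=hardness.get)
order_after = sorted(g, key=g.get)
assert order_after == order_before
```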
3.4.6. Given a physical quantity, the system of transformations up to which it is described by numbers may vary in time, i.e. with the stage of development of the subject. Thus temperature was originally a number only up to any monotone transformation. 3 With the development of thermometry, particularly of the concordant ideal gas thermometry, the transformations were restricted to the linear ones, i.e. only the absolute zero and the absolute unit were missing. Subsequent developments of thermodynamics even fixed the absolute zero, so that the transformation system in thermodynamics consists only of the multiplication by constants. Examples could be multiplied, but there seems to be no need to go into this subject further.
For utility the situation seems to be of a similar nature. One may take the attitude that the only "natural" datum in this domain is the relation "greater," i.e. the concept of preference. In this case utilities are numerical up to a monotone transformation. This is, indeed, the generally accepted standpoint in economic literature, best expressed in the technique of indifference curves.
To narrow the system of transformations it would be necessary to discover further "natural" operations or relations in the domain of utility. Thus it was pointed out by Pareto 4 that an equality relation for utility differences would suffice; in our terminology it would reduce the transformation system to the linear transformations. 5 However, since it does not
1 One could also imagine intermediate cases of greater transformation systems than these, but not containing all monotone transformations. Various forms of the theory of relativity give rather technical examples of this.
2 In the usual language this would hold for physical quantities where an absolute zero as well as an absolute unit can be defined. This is, e.g., the case for the absolute value (not the vector!) of velocity in such physical theories as those in which light velocity plays a normative role: Maxwellian electrodynamics, special relativity.
3 As long as only the concept of "warmer," i.e. a "natural" relation "greater," was known. We discussed this in extenso previously.
4 V. Pareto, Manuel d'Economie Politique, Paris, 1907, p. 264.
5 This is exactly what Euclid did for position on a line. The utility concept of "preference" corresponds to the relation of "lying to the right of" there, and the (desired) relation of the equality of utility differences to the geometrical congruence of intervals.
seem that this relation is really a "natural" one, i.e. one which can be interpreted by reproducible observations, the suggestion does not achieve the purpose.
3.5. Conceptual Structure of the Axiomatic Treatment of Numerical Utilities
3.5.1. The failure of one particular device need not exclude the possibility of achieving the same end by another device. Our contention is that the domain of utility contains a "natural" operation which narrows the system of transformations to precisely the same extent as the other device would have done. This is the combination of two utilities with two given alternative probabilities α, 1 − α, (0 < α < 1) as described in 3.3.2. The process is so similar to the formation of centers of gravity mentioned in 3.4.3. that it may be advantageous to use the same terminology. Thus we have for utilities u, v the "natural" relation u > v (read: u is preferable to v), and the "natural" operation αu + (1 − α)v, (0 < α < 1), (read: center of gravity of u, v with the respective weights α, 1 − α; or: combination of u, v with the alternative probabilities α, 1 − α). If the existence and reproducible observability of these concepts is conceded, then our way is clear: We must find a correspondence between utilities and numbers which carries the relation u > v and the operation αu + (1 − α)v for utilities into the synonymous concepts for numbers.
Denote the correspondence by

u → ρ = v(u),

u being the utility and v(u) the number which the correspondence attaches to it. Our requirements are then:

(3:1:a) u > v implies v(u) > v(v),
(3:1:b) v(αu + (1 − α)v) = αv(u) + (1 − α)v(v). 1
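In a toy numerical model, where utilities are themselves represented by real numbers and αu + (1 − α)v is ordinary arithmetic, any increasing linear valuation fulfills (3:1:a) and (3:1:b). The Python sketch below is an editorial illustration of this, not the text's construction; the model and the constants are invented:

```python
# Toy model: utilities are real numbers, the "natural" operation is
# ordinary arithmetic, and the valuation is an increasing linear map.
import random

def valuation(u, w0=2.0, w1=5.0):
    """v(u) = w0*u + w1 with w0 > 0 (illustrative constants)."""
    return w0 * u + w1

random.seed(12345)
for _ in range(1000):
    u, v = random.uniform(-9, 9), random.uniform(-9, 9)
    a = random.random()
    # (3:1:a): u > v implies v(u) > v(v).
    if u > v:
        assert valuation(u) > valuation(v)
    # (3:1:b): the valuation of a combination is the same combination
    # of the valuations, i.e. it behaves like a mathematical expectation.
    lhs = valuation(a * u + (1 - a) * v)
    rhs = a * valuation(u) + (1 - a) * valuation(v)
    assert abs(lhs - rhs) < 1e-9
```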
If two such correspondences

(3:2:a) u → ρ = v(u),
(3:2:b) u → ρ′ = v′(u),

should exist, then they set up a correspondence between numbers

(3:3) ρ → ρ′,

for which we may also write

(3:4) ρ′ = φ(ρ).

Since (3:2:a), (3:2:b) fulfill (3:1:a), (3:1:b), the correspondence (3:3), i.e. the function φ(ρ) in (3:4), must leave the relation ρ > σ 2 and the operation
1 Observe that in each case the left-hand side has the "natural" concepts for utilities, and the right-hand side the conventional ones for numbers.
2 Now these are applied to numbers ρ, σ!
αρ + (1 − α)σ unaffected (cf. footnote 1 above). I.e.

(3:5:a) ρ > σ implies φ(ρ) > φ(σ),
(3:5:b) φ(αρ + (1 − α)σ) = αφ(ρ) + (1 − α)φ(σ).

Hence φ(ρ) must be a linear function, i.e.

(3:6) ρ′ = φ(ρ) ≡ ω₀ρ + ω₁,

where ω₀, ω₁ are fixed numbers (constants) with ω₀ > 0.
So we see: If such a numerical valuation of utilities 1 exists at all, then it is determined up to a linear transformation. 2, 3 I.e. then utility is a number up to a linear transformation.
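This uniqueness conclusion can be checked numerically: in a toy model with utilities represented by real numbers, two valuations fulfilling (3:1:a), (3:1:b) must be connected by φ(ρ) = ω₀ρ + ω₁ with ω₀ > 0, and the ω₀, ω₁ determined from two sample points must already fit everywhere. A sketch with invented valuations (an editorial illustration, not the text's argument):

```python
# Two valuations of the same numerically modeled utilities; both are
# increasing linear maps, hence both fulfill (3:1:a), (3:1:b).
v = lambda u: 2.0 * u + 5.0     # one valuation (invented)
vp = lambda u: 3.0 * u - 0.5    # another such valuation (invented)

# Determine phi(p) = w0*p + w1 from two sample utilities u0, u1.
u0, u1 = 0.0, 1.0
w0 = (vp(u1) - vp(u0)) / (v(u1) - v(u0))
w1 = vp(u0) - w0 * v(u0)
assert w0 > 0

# phi must now connect the valuations at *every* utility: vp = w0*v + w1.
for u in [-7.0, 0.25, 3.0, 100.0]:
    assert abs(vp(u) - (w0 * v(u) + w1)) < 1e-9
```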
In order that a numerical valuation in the above sense should exist it is necessary to postulate certain properties of the relation u > v and the operation αu + (1 − α)v for utilities. The selection of these postulates or axioms and their subsequent analysis leads to problems of a certain mathematical interest. In what follows we give a general outline of the situation for the orientation of the reader; a complete discussion is found in the Appendix.
3.5.2. A choice of axioms is not a purely objective task. It is usually expected to achieve some definite aim: some specific theorem or theorems are to be derivable from the axioms, and to this extent the problem is exact and objective. But beyond this there are always other important desiderata of a less exact nature: The axioms should not be too numerous, their system is to be as simple and transparent as possible, and each axiom should have an immediate intuitive meaning by which its appropriateness may be judged directly. 4 In a situation like ours this last requirement is particularly vital, in spite of its vagueness: we want to make an intuitive concept amenable to mathematical treatment and to see as clearly as possible what hypotheses this requires.
The objective part of our problem is clear: the postulates must imply the existence of a correspondence (3:2:a) with the properties (3:1:a), (3:1:b) as described in 3.5.1. The further heuristic, and even esthetic, desiderata indicated above do not determine a unique way of finding this axiomatic treatment. In what follows we shall formulate a set of axioms which seems to be essentially satisfactory.
1 I.e. a correspondence (3:2:a) which fulfills (3:1:a), (3:1:b).
2 I.e. one of the form (3:6).
3 Remember the physical examples of the same situation given in 3.4.4. (Our present discussion is somewhat more detailed.) We do not undertake to fix an absolute zero and an absolute unit of utility.
4 The first and the last principle may represent, at least to a certain extent, opposite influences: If we reduce the number of axioms by merging them as far as technically possible, we may lose the possibility of distinguishing the various intuitive backgrounds. Thus we could have expressed the group (3:B) in 3.6.1. by a smaller number of axioms, but this would have obscured the subsequent analysis of 3.6.2.
To strike a proper balance is a matter of practical and to some extent even esthetic judgment.
3.6. The Axioms and Their Interpretation
3.6.1. Our axioms are these:
We consider a system U of entities 1 u, v, w, . . . . In U a relation is given, u > v, and for any number α, (0 < α < 1), an operation

αu + (1 − α)v = w.

These concepts satisfy the following axioms:

(3:A) u > v is a complete ordering of U. 2
This means: Write u < v when v > u. Then:
(3:A:a) For any two u, v one and only one of the three following relations holds:
u = v, u > v, u < v.
(3:A:b) u > v, v > w imply u > w. 3
(3:B) Ordering and combining. 4
(3:B:a) u < v implies that u < αu + (1 − α)v.
(3:B:b) u > v implies that u > αu + (1 − α)v.
(3:B:c) u < w < v implies the existence of an α with
αu + (1 − α)v < w.
(3:B:d) u > w > v implies the existence of an α with
αu + (1 − α)v > w.
(3:C) Algebra of combining.
(3:C:a) αu + (1 − α)v = (1 − α)v + αu.
(3:C:b) α(βu + (1 − β)v) + (1 − α)v = γu + (1 − γ)v
where γ = αβ.
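The axioms (3:B), (3:C) can be spot-checked in a toy numerical model, where the entities are real numbers and αu + (1 − α)v is ordinary arithmetic. The sketch below is a modern editorial illustration, not part of the text; for (3:B:c) the witnessing α is computed explicitly:

```python
# Spot check of axioms (3:B), (3:C) when utilities are real numbers.
import random

def mix(a, u, v):
    """The operation a*u + (1 - a)*v."""
    return a * u + (1 - a) * v

random.seed(12345)
for _ in range(1000):
    u, w, v = sorted(random.uniform(-5, 5) for _ in range(3))
    if not (u < w < v):
        continue
    a, b = random.uniform(0.001, 0.999), random.uniform(0.001, 0.999)
    # (3:B:a)/(3:B:b): a proper mixture lies strictly between u and v.
    assert u < mix(a, u, v) < v
    # (3:B:c): any alpha above (v - w)/(v - u) pushes the mixture below w.
    astar = (v - w) / (v - u)          # at astar the mixture equals w
    assert mix((astar + 1) / 2, u, v) < w
    # (3:C:a): the order in which the constituents are named is irrelevant.
    assert abs(mix(a, u, v) - mix(1 - a, v, u)) < 1e-12
    # (3:C:b): a two-step combination collapses, with gamma = a*b.
    assert abs(mix(a, mix(b, u, v), v) - mix(a * b, u, v)) < 1e-12
```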
One can show that these axioms imply the existence of a correspondence (3:2:a) with the properties (3:1:a), (3:1:b) as described in 3.5.1. Hence the conclusions of 3.5.1. hold good: The system U, i.e. in our present interpretation the system of (abstract) utilities, is one of numbers up to a linear transformation.
The construction of (3:2:a) (with (3:1:a), (3:1:b) by means of the axioms (3:A)-(3:C)) is a purely mathematical task which is somewhat lengthy, although it runs along conventional lines and presents no particular difficulties. (Cf. Appendix.)
1 This is, of course, meant to be the system of (abstract) utilities, to be characterized by our axioms. Concerning the general nature of the axiomatic method, cf. the remarks and references in the last part of 10.1.1.
2 For a more systematic mathematical discussion of this notion, cf. 65.3.1. The equivalent concept of the completeness of the system of preferences was previously considered at the beginning of 3.3.2. and of 3.4.6.
3 These conditions (3:A:a), (3:A:b) correspond to (65:A:a), (65:A:b) in 65.3.1.
4 Remember that the α, β, γ occurring here are always > 0, < 1.
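The simplest form of such a construction can be sketched as follows: fix two utilities u₀ < u₁, put v(u₀) = 0, v(u₁) = 1, and assign to any intermediate w the probability α at which w becomes indifferent to the combination αu₁ + (1 − α)u₀. The Python sketch below is an editorial illustration, not the Appendix's construction; the preference model `hidden` and all outcome names are invented stand-ins for observed preferences:

```python
# Invented stand-in for an individual's preferences: internally generated
# by hidden numeric intensities, which the procedure below must recover
# (up to a linear transformation) without reading them directly.
hidden = {"nothing": 0.0, "umbrella": 3.0, "trip": 10.0}

def prefers(lottery_a, lottery_b):
    """lottery = list of (probability, outcome) pairs."""
    val = lambda lot: sum(p * hidden[o] for p, o in lot)
    return val(lottery_a) > val(lottery_b)

def utility(w, u0="nothing", u1="trip", tol=1e-9):
    """Bisect for the alpha making w indifferent to alpha*u1 + (1-alpha)*u0."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        a = (lo + hi) / 2
        combo = [(a, u1), (1 - a, u0)]
        if prefers([(1.0, w)], combo):
            lo = a       # w still preferred: raise the chance of u1
        else:
            hi = a
    return (lo + hi) / 2
```

With these invented intensities, `utility("umbrella")` converges to about 0.3, i.e. to (3 − 0)/(10 − 0), as the linear-transformation result predicts.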
It seems equally unnecessary to carry out the usual logistic discussion of these axioms 1 on this occasion.
We shall, however, say a few more words about the intuitive meaning, i.e. the justification, of each one of our axioms (3:A)-(3:C).
3.6.2. The analysis of our postulates follows:
(3:A:a*) This is the statement of the completeness of the system of individual preferences. It is customary to assume this when discussing utilities or preferences, e.g. in the "indifference curve analysis method." These questions were already considered in 3.3.4. and 3.4.6.
(3:A:b*) This is the "transitivity" of preference, a plausible and generally accepted property.
(3:B:a*) We state here: If v is preferable to u, then even a chance 1 − α of v, alternatively to u, is preferable. This is legitimate since any kind of complementarity (or the opposite) has been excluded, cf. the beginning of 3.3.2.
(3:B:b*) This is the dual of (3:B:a*), with "less preferable" in place of "preferable."
(3:B:c*) We state here: If w is preferable to u, and an even more preferable v is also given, then the combination of u with a chance 1 − α of v will not affect w's preferability to it if this chance is small enough. I.e.: However desirable v may be in itself, one can make its influence as weak as desired by giving it a sufficiently small chance. This is a plausible "continuity" assumption.
(3:B:d*) This is the dual of (3:B:c*), with "less preferable" in place of "preferable."
(3:C:a*) This is the statement that it is irrelevant in which order the constituents u, v of a combination are named. It is legitimate, particularly since the constituents are alternative events, cf. (3:B:a*) above.
(3:C:b*) This is the statement that it is irrelevant whether a combination of two constituents is obtained in two successive steps, first the probabilities α, 1 − α, then the probabilities β, 1 − β; or in one operation, the probabilities γ, 1 − γ, where γ = αβ. 2 The same things can be said for this as for (3:C:a*) above. It may be, however, that this postulate has a deeper significance, to which one allusion is made in 3.7.1. below.
1 A similar situation is dealt with more exhaustively in 10.; those axioms describe a
subject which is more vital for our main objective. The logistic discussion is indicated
there in 10.2. Some of the general remarks of 10.3. apply to the present case also.
2 This is of course the correct arithmetic of accounting for two successive admixtures
of v with u.
3.7. General Remarks Concerning the Axioms
3.7.1. At this point it may be well to stop and to reconsider the situation. Have we not shown too much? We can derive from the postulates (3:A)-(3:C) the numerical character of utility in the sense of (3:2:a) and (3:1:a), (3:1:b) in 3.5.1.; and (3:1:b) states that the numerical values of utility combine (with probabilities) like mathematical expectations! And yet the concept of mathematical expectation has been often questioned, and its legitimateness is certainly dependent upon some hypothesis concerning the nature of an "expectation." 1 Have we not then begged the question? Do not our postulates introduce, in some oblique way, the hypotheses which bring in the mathematical expectation?
More specifically: May there not exist in an individual a (positive or negative) utility of the mere act of "taking a chance," of gambling, which the use of the mathematical expectation obliterates?
How did our axioms (3:A)-(3:C) get around this possibility?
As far as we can see, our postulates (3:A)-(3:C) do not attempt to avoid it. Even that one which gets closest to excluding a "utility of gambling," (3:C:b) (cf. its discussion in 3.6.2.), seems to be plausible and legitimate, unless a much more refined system of psychology is used than the one now available for the purposes of economics. The fact that a numerical utility, with a formula amounting to the use of mathematical expectations, can be built upon (3:A)-(3:C), seems to indicate this: We have practically defined numerical utility as being that thing for which the calculus of mathematical expectations is legitimate. 2 Since (3:A)-(3:C) secure that the necessary construction can be carried out, concepts like a "specific utility of gambling" cannot be formulated free of contradiction on this level. 3
3.7.2. As we have stated, the last time in 3.6.1., our axioms are based on the relation u > v and on the operation αu + (1 − α)v for utilities. It seems noteworthy that the latter may be regarded as more immediately given than the former: One can hardly doubt that anybody who could imagine two alternative situations with the respective utilities u, v could not also conceive the prospect of having both with the given respective probabilities α, 1 − α. On the other hand one may question the postulate of axiom (3:A:a) for u > v, i.e. the completeness of this ordering.
Let us consider this point for a moment. We have conceded that one may doubt whether a person can always decide which of two alternatives
1 Cf. Karl Menger: Das Unsicherheitsmoment in der Wertlehre, Zeitschrift für Nationalökonomie, vol. 5, (1934) pp. 459ff. and Gerhard Tintner: A contribution to the non-static Theory of Choice, Quarterly Journal of Economics, vol. LVI, (1942) pp. 274ff.
2 Thus Daniel Bernoulli's well known suggestion to "solve" the "St. Petersburg Paradox" by the use of the so-called "moral expectation" (instead of the mathematical expectation) means defining the utility numerically as the logarithm of one's monetary possessions.
3 This may seem to be a paradoxical assertion. But anybody who has seriously tried to axiomatize that elusive concept, will probably concur with it.
with the utilities u, v he prefers. 1 But, whatever the merits of this doubt are, this possibility, i.e. the completeness of the system of (individual) preferences, must be assumed even for the purposes of the "indifference curve method" (cf. our remarks on (3:A:a) in 3.6.2.). But if this property of u > v 2 is assumed, then our use of the much less questionable αu + (1 − α)v 3 yields the numerical utilities too! 4
If the general comparability assumption is not made, 5 a mathematical theory based on αu + (1 − α)v together with what remains of u > v is still possible. 6 It leads to what may be described as a many-dimensional vector concept of utility. This is a more complicated and less satisfactory set-up, but we do not propose to treat it systematically at this time.
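Such a many-dimensional vector concept can be pictured as componentwise comparison, under which some pairs of utilities are simply incomparable. A modern editorial sketch with invented two-component utilities (not from the text):

```python
# Partially ordered, vector-valued utility: u is preferred to v only
# when every component agrees; otherwise the pair is incomparable.

def dominates(u, v):
    """True when u is at least as good in every component and u != v."""
    return u != v and all(a >= b for a, b in zip(u, v))

u, v, w = (3, 5), (1, 2), (4, 1)
assert dominates(u, v)                               # comparable pair
assert not dominates(u, w) and not dominates(w, u)   # incomparable pair
```

This weakening corresponds to replacing "one and only one" in (3:A:a) by "at most one," as in footnote 5.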
3.7.3. This brief exposition does not claim to exhaust the subject, but we hope to have conveyed the essential points. To avoid misunderstandings, the following further remarks may be useful.
(1) We re-emphasize that we are considering only utilities experienced by one person. These considerations do not imply anything concerning the comparisons of the utilities belonging to different individuals.
(2) It cannot be denied that the analysis of the methods which make use of mathematical expectation (cf. footnote 1 above for the literature) is far from concluded at present. Our remarks in 3.7.1. lie in this direction, but much more should be said in this respect. There are many interesting questions involved, which however lie beyond the scope of this work. For our purposes it suffices to observe that the validity of the simple and plausible axioms (3:A)-(3:C) in 3.6.1. for the relation u > v and the operation αu + (1 − α)v makes the utilities numbers up to a linear transformation in the sense discussed in these sections.
3.8. The Role of the Concept of Marginal Utility
3.8.1. The preceding analysis made it clear that we feel free to make use of a numerical conception of utility. On the other hand, subsequent
1 Or that he can assert that they are precisely equally desirable.
2 I.e. the completeness postulate (3:A:a).
3 I.e. the postulates (3:B), (3:C), together with the obvious postulate (3:A:b).
4 At this point the reader may recall the familiar argument according to which the unnumerical ("indifference curve") treatment of utilities is preferable to any numerical one, because it is simpler and based on fewer hypotheses. This objection might be legitimate if the numerical treatment were based on Pareto's equality relation for utility differences (cf. the end of 3.4.6.). This relation is, indeed, a stronger and more complicated hypothesis, added to the original ones concerning the general comparability of utilities (completeness of preferences).
However, we used the operation αu + (1 − α)v instead, and we hope that the reader will agree with us that it represents an even safer assumption than that of the completeness of preferences.
We think therefore that our procedure, as distinguished from Pareto's, is not open to the objections based on the necessity of artificial assumptions and a loss of simplicity.
5 This amounts to weakening (3:A:a) to an (3:A:a') by replacing in it "one and only one" by "at most one." The conditions (3:A:a'), (3:A:b) then correspond to (65:B:a), (65:B:b).
6 In this case some modifications in the groups of postulates (3:B), (3:C) are also necessary.
discussions will show that we cannot avoid the assumption that all subjects of the economy under consideration are completely informed about the physical characteristics of the situation in which they operate and are able to perform all statistical, mathematical, etc., operations which this knowledge makes possible. The nature and importance of this assumption has been given extensive attention in the literature and the subject is probably very far from being exhausted. We propose not to enter upon it. The question is too vast and too difficult and we believe that it is best to "divide difficulties." I.e. we wish to avoid this complication which, while interesting in its own right, should be considered separately from our present problem.
Actually we think that our investigations, although they assume "complete information" without any further discussion, do make a contribution to the study of this subject. It will be seen that many economic and social phenomena which are usually ascribed to the individual's state of "incomplete information" make their appearance in our theory and can be satisfactorily interpreted with its help. Since our theory assumes "complete information," we conclude from this that those phenomena have nothing to do with the individual's "incomplete information." Some particularly striking examples of this will be found in the concepts of "discrimination" in 33.1., of "incomplete exploitation" in 38.3., and of the "transfer" or "tribute" in 46.11., 46.12.
On the basis of the above we would even venture to question the importance usually ascribed to incomplete information in its conventional sense 1 in economic and social theory. It will appear that some phenomena which would prima facie have to be attributed to this factor, have nothing to do with it. 2
3.8.2. Let us now consider an isolated individual with definite physical characteristics and with definite quantities of goods at his disposal. In view of what was said above, he is in a position to determine the maximum utility which can be obtained in this situation. Since the maximum is a well-defined quantity, the same is true for the increase which occurs when a unit of any definite good is added to the stock of all goods in the possession of the individual. This is, of course, the classical notion of the marginal utility of a unit of the commodity in question. 3
These quantities are clearly of decisive importance in the "Robinson Crusoe" economy. The above marginal utility obviously corresponds to
1 We shall see that the rules of the games considered may explicitly prescribe that certain participants should not possess certain pieces of information. Cf. 6.3., 6.4. (Games in which this does not happen are referred to in 14.8. and in (15:B) of 15.3.2., and are called games with "perfect information.") We shall recognize and utilize this kind of "incomplete information" (according to the above, rather to be called "imperfect information"). But we reject all other types, vaguely defined by the use of concepts like complication, intelligence, etc.
2 Our theory attributes these phenomena to the possibility of multiple "stable standards of behavior," cf. 4.6. and the end of 4.7.
3 More precisely: the so-called "indirectly dependent expected utility."
SOLUTIONS AND STANDARDS OF BEHAVIOR 31
the maximum effort which he will be willing to make, if he behaves according to the customary criteria of rationality, in order to obtain a further unit of that commodity.
It is not clear at all, however, what significance it has in determining the behavior of a participant in a social exchange economy. We saw that the principles of rational behavior in this case still await formulation, and that they are certainly not expressed by a maximum requirement of the Crusoe type. Thus it must be uncertain whether marginal utility has any meaning at all in this case. 1
Positive statements on this subject will be possible only after we have succeeded in developing a theory of rational behavior in a social exchange economy, that is, as was stated before, with the help of the theory of "games of strategy." It will be seen that marginal utility does, indeed, play an important role in this case too, but in a more subtle way than is usually assumed.
4. Structure of the Theory: Solutions and Standards of Behavior
4.1. The Simplest Concept of a Solution for One Participant
4.1.1. We have now reached the point where it becomes possible to give a positive description of our proposed procedure. This means primarily an outline and an account of the main technical concepts and devices.
As we stated before, we wish to find the mathematically complete principles which define "rational behavior" for the participants in a social economy, and to derive from them the general characteristics of that behavior. And while the principles ought to be perfectly general, i.e., valid in all situations, we may be satisfied if we can find solutions, for the moment, only in some characteristic special cases.
First of all we must obtain a clear notion of what can be accepted as a solution of this problem; i.e., what the amount of information is which a solution must convey, and what we should expect regarding its formal structure. A precise analysis becomes possible only after these matters have been clarified.
4.1.2. The immediate concept of a solution is plausibly a set of rules for each participant which tell him how to behave in every situation which may conceivably arise. One may object at this point that this view is unnecessarily inclusive. Since we want to theorize about "rational behavior," there seems to be no need to give the individual advice as to his behavior in situations other than those which arise in a rational community. This would justify assuming rational behavior on the part of the others as well, in whatever way we are going to characterize that. Such a procedure would probably lead to a unique sequence of situations to which alone our theory need refer.
1 All this is understood within the domain of our several simplifying assumptions. If
they are relaxed, then various further difficulties ensue.
This objection seems to be invalid for two reasons:
First, the "rules of the game," i.e. the physical laws which give the factual background of the economic activities under consideration, may be explicitly statistical. The actions of the participants of the economy may determine the outcome only in conjunction with events which depend on chance (with known probabilities), cf. footnote 2 on p. 10 and 6.2.1. If this is taken into consideration, then the rules of behavior, even in a perfectly rational community, must provide for a great variety of situations, some of which will be very far from optimum. 1
Second, and this is even more fundamental, the rules of rational behavior must provide definitely for the possibility of irrational conduct on the part of others. In other words: Imagine that we have discovered a set of rules for all participants, to be termed as "optimal" or "rational," each of which is indeed optimal provided that the other participants conform. Then the question remains as to what will happen if some of the participants do not conform. If that should turn out to be advantageous for them, and, quite particularly, disadvantageous to the conformists, then the above "solution" would seem very questionable. We are in no position to give a positive discussion of these things as yet, but we want to make it clear that under such conditions the "solution," or at least its motivation, must be considered as imperfect and incomplete. In whatever way we formulate the guiding principles and the objective justification of "rational behavior," provisos will have to be made for every possible conduct of "the others." Only in this way can a satisfactory and exhaustive theory be developed. But if the superiority of "rational behavior" over any other kind is to be established, then its description must include rules of conduct for all conceivable situations, including those where "the others" behaved irrationally, in the sense of the standards which the theory will set for them.
4.1.3. At this stage the reader will observe a great similarity with the
everyday concept of games. We think that this similarity is very essential;
indeed, that it is more than that. For economic and social problems the
games fulfill, or should fulfill, the same function which various geometrico-mathematical
models have successfully performed in the physical sciences.
Such models are theoretical constructs with a precise, exhaustive and not
too complicated definition; and they must be similar to reality in those
respects which are essential in the investigation at hand. To recapitulate
in detail: The definition must be precise and exhaustive in
order to make a mathematical treatment possible. The construct must
not be unduly complicated, so that the mathematical treatment can be
brought beyond the mere formalism to the point where it yields complete
numerical results. Similarity to reality is needed to make the operation
significant. And this similarity must usually be restricted to a few traits
1 That a unique optimal behavior is at all conceivable in spite of the multiplicity of
the possibilities determined by chance, is of course due to the use of the notion of "mathematical
expectation." Cf. loc. cit. above.
SOLUTIONS AND STANDARDS OF BEHAVIOR 33
deemed "essential" pro tempore, since otherwise the above requirements
would conflict with each other. 1
It is clear that if a model of economic activities is constructed according
to these principles, the description of a game results. This is particularly
striking in the formal description of markets, which are after all the core
of the economic system, but this statement is true in all cases and without
qualifications.
4.1.4. We described in 4.1.2. what we expect a solution, i.e. a characterization
of "rational behavior", to consist of. This amounted to a complete
set of rules of behavior in all conceivable situations. This holds equivalently
for a social economy and for games. The entire result in the
above sense is thus a combinatorial enumeration of enormous complexity.
But we have accepted a simplified concept of utility according to which all
the individual strives for is fully described by one numerical datum (cf.
2.1.1. and 3.3.). Thus the complicated combinatorial catalogue which
we expect from a solution permits a very brief and significant summarization:
the statement of how much 2, 3 the participant under consideration can
get if he behaves " rationally. " This "can get" is, of course, presumed to
be a minimum; he may get more if the others make mistakes (behave
irrationally).
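The guaranteed amount just described can be made concrete with a small sketch for the simplest case, a two-person zero-sum game given in matrix form. The payoff matrix below is invented for illustration: the amount the row player "can get" is the maximin of the rows, and mistakes by the opponent can only raise his take.

```python
# Illustrative sketch (invented payoff matrix): the amount the row player
# "can get" in a two-person zero-sum game is the maximin value.
payoffs = [  # payoffs[r][c] = amount the row player receives
    [3, 1],
    [2, 0],
]

# For each row, the worst case over the opponent's columns; "rational" play
# secures the best of these worst cases.
row_guarantees = [min(row) for row in payoffs]   # [1, 0]
can_get = max(row_guarantees)                    # the guaranteed minimum: 1
best_row = row_guarantees.index(can_get)         # row 0

# If the opponent behaves irrationally (column 0 instead of the minimizing
# column 1), the row player receives payoffs[best_row][0] = 3 > can_get.
print(can_get, payoffs[best_row][0])
```

Here the guaranteed amount is 1; the row player obtains more (3) only through the opponent's mistake.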
It ought to be understood that all this discussion is advanced, as it
should be, preliminary to the building of a satisfactory theory along the
lines indicated. We formulate desiderata which will serve as a gauge of
success in our subsequent considerations; but it is in accordance with the
usual heuristic procedure to reason about these desiderata even before
we are able to satisfy them. Indeed, this preliminary reasoning is an
essential part of the process of finding a satisfactory theory. 4
4.2. Extension to All Participants
4.2.1. We have considered so far only what the solution ought to be for
one participant. Let us now visualize all participants simultaneously.
I.e., let us consider a social economy, or equivalently a game of a fixed
number of (say n) participants. The complete information which a solution
should convey is, as we discussed it, of a combinatorial nature. It was
indicated furthermore how a single quantitative statement contains the
decisive part of this information, by stating how much each participant
1 E.g., Newton's description of the solar system by a small number of "mass-points."
These points attract each other and move like the stars; this is the similarity in the essentials,
while the enormous wealth of the other physical features of the planets has been left
out of account.
2 Utility; for an entrepreneur, profit; for a player, gain or loss.
3 We mean, of course, the "mathematical expectation," if there is an explicit element
of chance. Cf. the first remark in 4.1.2. and also the discussion of 3.7.1.
4 Those who are familiar with the development of physics will know how important
such heuristic considerations can be. Neither general relativity nor quantum mechanics
could have been found without a "pre-theoretical" discussion of the desiderata concerning
the theory to be.
obtains by behaving rationally. Consider these amounts which the several
participants "obtain." If the solution did nothing more in the quantitative
sense than specify these amounts, 1 then it would coincide with the well
known concept of imputation: it would just state how the total proceeds
are to be distributed among the participants. 2
We emphasize that the problem of imputation must be solved both
when the total proceeds are in fact identically zero and when they are variable.
This problem, in its general form, has neither been properly formulated
nor solved in economic literature.
4.2.2. We can see no reason why one should not be satisfied with a
solution of this nature, providing it can be found: i.e. a single imputation
which meets reasonable requirements for optimum (rational) behavior.
(Of course we have not yet formulated these requirements. For an exhaustive
discussion, cf. loc. cit. below.) The structure of the society under consideration
would then be extremely simple: There would exist an absolute
state of equilibrium in which the quantitative share of every participant
would be precisely determined.
It will be seen, however, that such a solution, possessing all necessary
properties, does not exist in general. The notion of a solution will have
to be broadened considerably, and it will be seen that this is closely connected
with certain inherent features of social organization that are well
known from a "common sense" point of view but thus far have not been
viewed in proper perspective. (Cf. 4.6. and 4.8.1.)
4.2.3. Our mathematical analysis of the problem will show that there
exists, indeed, a not inconsiderable family of games where a solution can be
defined and found in the above sense: i.e. as one single imputation. In
such cases every participant obtains at least the amount thus imputed to
him by just behaving appropriately, rationally. Indeed, he gets exactly
this amount if the other participants too behave rationally; if they do not,
he may get even more.
These are the games of two participants where the sum of all payments
is zero. While these games are not exactly typical for major economic
processes, they contain some universally important traits of all games and
the results derived from them are the basis of the general theory of games.
We shall discuss them at length in Chapter III.
4.3. The Solution as a Set of Imputations
4.3.1. If either of the two above restrictions is dropped, the situation is
altered materially.
1 And of course, in the combinatorial sense, as outlined above, the procedure how to
obtain them.
2 In games as usually understood the total proceeds are always zero; i.e. one
participant can gain only what the others lose. Thus there is a pure problem of distribution,
i.e. imputation, and absolutely none of increasing the total utility, the "social
product." In all economic questions the latter problem arises as well, but the question
of imputation remains. Subsequently we shall broaden the concept of a game by dropping
the requirement of the total proceeds being zero (cf. Ch. XI).
The simplest game where the second requirement is overstepped is a
two-person game where the sum of all payments is variable. This corresponds
to a social economy with two participants and allows both for
their interdependence and for variability of total utility with their behavior. 1
As a matter of fact this is exactly the case of a bilateral monopoly (cf.
61.2.-61.6.). The well known "zone of uncertainty" which is found in
current efforts to solve the problem of imputation indicates that a broader
concept of solution must be sought. This case will be discussed loc. cit.
above. For the moment we want to use it only as an indicator of the difficulty
and pass to the other case which is more suitable as a basis for a first
positive step.
4.3.2. The simplest game where the first requirement is disregarded is a
three-person game where the sum of all payments is zero. In contrast to
the above two-person game, this does not correspond to any fundamental
economic problem but it represents nevertheless a basic possibility in human
relations. The essential feature is that any two players who combine and
cooperate against a third can thereby secure an advantage. The problem
is how this advantage should be distributed among the two partners in this
combination. Any such scheme of imputation will have to take into
account that any two partners can combine; i.e. while any one combination
is in the process of formation, each partner must consider the fact that his
prospective ally could break away and join the third participant.
Of course the rules of the game will prescribe how the proceeds of a
coalition should be divided between the partners. But the detailed discussion
to be given in 22.1. shows that this will not be, in general, the
final verdict. Imagine a game (of three or more persons) in which two
participants can form a very advantageous coalition but where the rules
of the game provide that the greatest part of the gain goes to the first
participant. Assume furthermore that the second participant of this
coalition can also enter a coalition with the third one, which is less effective
in toto but promises him a greater individual gain than the former. In
this situation it is obviously reasonable for the first participant to transfer
a part of the gains which he could get from the first coalition to the second
participant in order to save this coalition. In other words: One must
expect that under certain conditions one participant of a coalition will be
willing to pay a compensation to his partner. Thus the apportionment
within a coalition depends not only upon the rules of the game but
also upon the above principles, under the influence of the alternative
coalitions. 2
Common sense suggests that one cannot expect any theoretical statement
as to which alliance will be formed, 3 but only information concerning
1 It will be remembered that we make use of a transferable utility, cf. 2.1.1.
2 This does not mean that the rules of the game are violated, since such compensatory
payments, if made at all, are made freely in pursuance of a rational consideration.
3 Obviously three combinations of two partners each are possible. In the example
to be given in 21., any preference within the solution for a particular alliance will be a
how the partners in a possible combination must divide the spoils in order
to avoid the contingency that any one of them deserts to form a combination
with the third player. All this will be discussed in detail and quantitatively
in Ch. V.
It suffices to state here only the result which the above qualitative
considerations make plausible and which will be established more rigorously
loc. cit. A reasonable concept of a solution consists in this case of a system
of three imputations. These correspond to the above-mentioned three
combinations or alliances and express the division of spoils between respective
allies.
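The three imputations can be written out explicitly for the symmetric case. The normalization below is our assumption: each two-person coalition wins one unit in total from the excluded player and, by symmetry, splits it evenly between the allies.

```python
# Sketch (normalization assumed): in the symmetric three-person zero-sum game,
# any two-player coalition wins one unit in total from the excluded player.
from itertools import combinations

players = (0, 1, 2)
solution = []
for coalition in combinations(players, 2):
    imputation = [-1.0] * 3        # the excluded player loses one unit
    for p in coalition:
        imputation[p] = 0.5        # the allies split the unit gain evenly
    solution.append(tuple(imputation))

print(solution)
# [(0.5, 0.5, -1.0), (0.5, -1.0, 0.5), (-1.0, 0.5, 0.5)]
```

Each imputation sums to zero, as the zero-sum condition requires; the three together correspond to the three possible alliances.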
4.3.3. The last result will turn out to be the prototype of the general
situation. We shall see that a consistent theory will result from looking
for solutions which are not single imputations, but rather systems of
imputations.
It is clear that in the above three-person game no single imputation
from the solution is in itself anything like a solution. Any particular
alliance describes only one particular consideration which enters the minds
of the participants when they plan their behavior. Even if a particular
alliance is ultimately formed, the division of the proceeds between the allies
will be decisively influenced by the other alliances which each one might
alternatively have entered. Thus only the three alliances and their
imputations together form a rational whole which determines all of its
details and possesses a stability of its own. It is, indeed, this whole which
is the really significant entity, more so than its constituent imputations.
Even if one of these is actually applied, i.e. if one particular alliance is
actually formed, the others are present in a "virtual" existence: Although
they have not materialized, they have contributed essentially to shaping and
determining the actual reality.
In conceiving of the general problem, a social economy or equivalently
a game of n participants, we shall, with an optimism which can be justified
only by subsequent success, expect the same thing: A solution should be a
system of imputations 1 possessing in its entirety some kind of balance and
stability the nature of which we shall try to determine. We emphasize
that this stability, whatever it may turn out to be, will be a property
of the system as a whole and not of the single imputations of which it is
composed. These brief considerations regarding the three-person game
have illustrated this point.
4.3.4. The exact criteria which characterize a system of imputations as a
solution of our problem are, of course, of a mathematical nature. For a
precise and exhaustive discussion we must therefore refer the reader to the
subsequent mathematical development of the theory. The exact definition
limine excluded by symmetry. I.e. the game will be symmetric with respect to all three
participants. Cf. however 33.1.1.
1 They may again include compensations between partners in a coalition, as described
in 4.3.2.
itself is stated in 30.1.1. We shall nevertheless undertake to give a preliminary,
qualitative outline. We hope this will contribute to the understanding
of the ideas on which the quantitative discussion is based. Besides, the
place of our considerations in the general framework of social theory will
become clearer.
4.4. The Intransitive Notion of "Superiority" or "Domination"
4.4.1. Let us return to a more primitive concept of the solution which we
know already must be abandoned. We mean the idea of a solution as a
single imputation. If this sort of solution existed it would have to be an
imputation which in some plausible sense was superior to all other imputations.
This notion of superiority as between imputations ought to be
formulated in a way which takes account of the physical and social structure
of the milieu. That is, one should define that an imputation x is
superior to an imputation y whenever this happens: Assume that society,
i.e. the totality of all participants, has to consider the question whether or
not to "accept" a static settlement of all questions of distribution by the
imputation y. Assume furthermore that at this moment the alternative
settlement by the imputation x is also considered. Then this alternative x
will suffice to exclude acceptance of y. By this we mean that a sufficient
number of participants prefer, in their own interest, x to y, and are convinced
or can be convinced of the possibility of obtaining the advantages of x.
In this comparison of x to y the participants should not be influenced by
the consideration of any third alternatives (imputations). I.e. we conceive
the relationship of superiority as an elementary one, correlating the two
imputations x and y only. The further comparison of three or more,
ultimately of all, imputations is the subject of the theory which must
now follow, as a superstructure erected upon the elementary concept of
superiority.
Whether the possibility of obtaining certain advantages by relinquishing
y for x, as discussed in the above definition, can be made convincing to the
interested parties will depend upon the physical facts of the situation; in
the terminology of games, on the rules of the game.
We prefer to use, instead of " superior" with its manifold associations, a
word more in the nature of a terminus technicus. When the above described
relationship between two imputations x and y exists, 1 then we shall say
that x dominates y.
If one restates a little more carefully what should be expected from a
solution consisting of a single imputation, this formulation obtains: Such
an imputation should dominate all others and be dominated by
none.
4.4.2. The notion of domination as formulated, or rather indicated,
above is clearly in the nature of an ordering, similar to the question of
1 That is, when it holds in the mathematically precise form, which will be given in
30.1.1.
preference, or of size in any quantitative theory. The notion of a single
imputation solution 1 corresponds to that of the first element with respect
to that ordering. 2
The search for such a first element would be a plausible one if the ordering
in question, i.e. our notion of domination, possessed the important
property of transitivity; that is, if it were true that whenever x dominates
y and y dominates z, then also x dominates z. In this case one might proceed
as follows: Starting with an arbitrary x, look for a y which dominates x; if
such a y exists, choose one and look for a z which dominates y; if such a z
exists, choose one and look for a u which dominates z, etc. In most practical
problems there is a fair chance that this process either terminates after a
finite number of steps with a w which is undominated by anything else, or
that the sequence x, y, z, u, ..., goes on ad infinitum, but that these
x, y, z, u, ... tend to a limiting position w undominated by anything else.
And, due to the transitivity referred to above, the final w will in either case
dominate all previously obtained x, y, z, u, ... .
We shall not go into more elaborate details which could and should
be given in an exhaustive discussion. It will probably be clear to the reader
that the progress through the sequence x, y, z, u, ... corresponds to
successive "improvements" culminating in the "optimum," i.e. the "first"
element w which dominates all others and is not dominated.
All this becomes very different when transitivity does not prevail.
In that case any attempt to reach an "optimum" by successive improvements
may be futile. It can happen that x is dominated by y, y by z, and
z in turn by x. 3
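The breakdown of the improvement procedure under a cyclical domination can be seen in a minimal sketch; the three-element relation below is invented for illustration.

```python
# Hypothetical intransitive domination: y dominates x, z dominates y,
# and x in turn dominates z -- a cycle with no first element.
dominates = {"y": "x", "z": "y", "x": "z"}   # dominator -> dominated

def dominator_of(elem):
    """Return some element dominating elem, or None if elem is undominated."""
    for winner, loser in dominates.items():
        if loser == elem:
            return winner
    return None

# Successive "improvements" never terminate: the walk revisits its start.
walk = ["x"]
for _ in range(6):
    walk.append(dominator_of(walk[-1]))
print(walk)   # ['x', 'y', 'z', 'x', 'y', 'z', 'x']

# No element is undominated, hence no "first element" exists at all.
assert all(dominator_of(e) is not None for e in ("x", "y", "z"))
```

The walk cycles forever instead of converging to an undominated "optimum".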
4.4.3. Now the notion of domination on which we rely is, indeed, not
transitive. In our tentative description of this concept we indicated that x
dominates y when there exists a group of participants each one of whom
prefers his individual situation in x to that in y, and who are convinced
that they are able, as a group, i.e. as an alliance, to enforce their preferences.
We shall discuss these matters in detail in 30.2. This group of
participants shall be called the "effective set" for the domination of x over y.
Now when x dominates y and y dominates z, the effective sets for these two
dominations may be entirely disjunct and therefore no conclusions can be
drawn concerning the relationship between z and x. It can even happen
that z dominates x with the help of a third effective set, possibly disjunct
from both previous ones.
1 We continue to use it as an illustration although we have shown already that it is a
forlorn hope. The reason for this is that, by showing what is involved if certain complications
did not arise, we can put these complications into better perspective. Our real
interest at this stage lies of course in these complications, which are quite fundamental.
2 The mathematical theory of ordering is very simple and leads probably to a deeper
understanding of these conditions than any purely verbal discussion. The necessary
mathematical considerations will be found in 65.3.
3 In the case of transitivity this is impossible because, if a proof be wanted, x never
dominates itself. Indeed, if e.g. y dominates x, z dominates y, and x dominates z, then
we can infer by transitivity that x dominates x.
This lack of transitivity, especially in the above formalistic presentation,
may appear to be an annoying complication and it may even seem desirable
to make an effort to rid the theory of it. Yet the reader who takes another
look at the last paragraph will notice that it really contains only a circumlocution
of a most typical phenomenon in all social organizations. The
domination relationships between various imputations x, y, z, i.e.
between various states of society, correspond to the various ways in which
these can unstabilize, i.e. upset, each other. That various groups of
participants acting as effective sets in various relations of this kind may
bring about "cyclical" dominations, e.g., y over x, z over y, and x over z,
is indeed one of the most characteristic difficulties which a theory of these
phenomena must face.
4.5. The Precise Definition of a Solution
4.5.1. Thus our task is to replace the notion of the optimum, i.e. of the
first element, by something which can take over its functions in a static
equilibrium. This becomes necessary because the original concept has
become untenable. We first observed its breakdown in the specific instance
of a certain three-person game in 4.3.2.-4.3.3. But now we have acquired
a deeper insight into the cause of its failure: it is the nature of our concept of
domination, and specifically its intransitivity.
This type of relationship is not at all peculiar to our problem. Other
instances of it are well known in many fields and it is to be regretted that
they have never received a generic mathematical treatment. We mean all
those concepts which are in the general nature of a comparison of preference
or "superiority," or of order, but lack transitivity: e.g., the strength of
chess players in a tournament, the "paper form" in sports and races, etc. 1
4.5.2. The discussion of the three-person game in 4.3.2.-4.3.3. indicated
that the solution will be, in general, a set of imputations instead of a single
imputation. That is, the concept of the "first element" will have to be
replaced by that of a set of elements (imputations) with suitable properties.
In the exhaustive discussion of this game in 32. (cf. also the interpretation
in 33.1.1. which calls attention to some deviations) the system of three
imputations, which was introduced as the solution of the three-person game in
4.3.2.-4.3.3., will be derived in an exact way with the help of the postulates
of 30.1.1. These postulates will be very similar to those which characterize
a first element. They are, of course, requirements for a set of elements
(imputations), but if that set should turn out to consist of a single element
only, then our postulates go over into the characterization of the first
element (in the total system of all imputations).
We do not give a detailed motivation for those postulates as yet, but we
shall formulate them now hoping that the reader will find them to be somewhat
1 Some of these problems have been treated mathematically by the introduction of
chance and probability. Without denying that this approach has a certain justification,
we doubt whether it is conducive to a complete understanding even in those cases. It
would be altogether inadequate for our considerations of social organization.
plausible. Some reasons of a qualitative nature, or rather one possible
interpretation, will be given in the paragraphs immediately following.
4.5.3. The postulates are as follows: A set S of elements (imputations)
is a solution when it possesses these two properties:
(4:A:a) No y contained in S is dominated by an x contained in S.
(4:A:b) Every y not contained in S is dominated by some x con
tained in S.
(4:A:a) and (4:A:b) can be stated as a single condition:
(4:A:c) The elements of S are precisely those elements which are
undominated by elements of S. 1
The reader who is interested in this type of exercise may now verify
our previous assertion that for a set S which consists of a single element x
the above conditions express precisely that x is the first element.
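The verification suggested above, and conditions (4:A:a) and (4:A:b) themselves, can be checked mechanically on a small example. The sketch below uses the symmetric three-person zero-sum game with assumptions of our own: coalition gains normalized to one unit, a two-player coalition enforcing x over y when both members strictly gain, and the imputation space discretized to a quarter-unit grid.

```python
# Sketch: check (4:A:a) and (4:A:b) for a candidate solution S of the symmetric
# three-person zero-sum game. Assumptions (ours): each player risks at most one
# unit, a two-player coalition enforces x over y when both its members strictly
# gain, and the imputation space is discretized to quarter-unit steps.
from itertools import combinations

def dominates(x, y):
    """x dominates y if some two-player coalition strictly prefers x to y."""
    return any(x[i] > y[i] and x[j] > y[j] for i, j in combinations(range(3), 2))

# Candidate solution: each two-player alliance splits its unit gain evenly.
S = {(0.5, 0.5, -1.0), (0.5, -1.0, 0.5), (-1.0, 0.5, 0.5)}

# All grid imputations: components >= -1 and summing to zero.
grid = [k / 4 for k in range(-4, 9)]              # -1.0, -0.75, ..., 2.0
imputations = [(a, b, -(a + b)) for a in grid for b in grid
               if -1.0 <= -(a + b)]

# (4:A:a): no element of S is dominated by an element of S.
assert not any(dominates(x, y) for x in S for y in S)
# (4:A:b): every imputation outside S is dominated by some element of S.
assert all(any(dominates(x, y) for x in S) for y in imputations if y not in S)
print("S is a solution on this grid")
```

If S is shrunk to a single imputation, the same two assertions become exactly the statement that it is a first element, as remarked above.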
4.5.4. Part of the malaise which the preceding postulates may cause at
first sight is probably due to their circular character. This is particularly
obvious in the form (4:A:c), where the elements of S are characterized by a
relationship which is again dependent upon S. It is important not to
misunderstand the meaning of this circumstance.
Since our definitions (4:A:a) and (4:A:b), or (4:A:c), are circular, i.e.
implicit, for S, it is not at all clear that there really exists an S which
fulfills them, nor whether, if there exists one, the S is unique. Indeed
these questions, at this stage still unanswered, are the main subject of the
subsequent theory. What is clear, however, is that these definitions tell
unambiguously whether any particular S is or is not a solution. If one
insists on associating with the concept of a definition the attributes of
existence and uniqueness of the object defined, then one must say: We
have not given a definition of S, but a definition of a property of S; we
have not defined the solution but characterized all possible solutions.
Whether the totality of all solutions, thus circumscribed, contains no S,
exactly one S, or several S's, is subject for further inquiry. 2
4.6. Interpretation of Our Definition in Terms of "Standards of Behavior"
4.6.1. The single imputation is an often used and well understood concept
of economic theory, while the sets of imputations to which we have
been led are rather unfamiliar ones. It is therefore desirable to correlate
them with something which has a well established place in our thinking
concerning social phenomena.
1 Thus (4:A:c) is an exact equivalent of (4:A:a) and (4:A:b) together. It may impress
the mathematically untrained reader as somewhat involved, although it is really a
straightforward expression of rather simple ideas.
2 It should be unnecessary to say that the circularity, or rather implicitness, of
(4:A:a) and (4:A:b), or (4:A:c), does not at all mean that they are tautological. They
express, of course, a very serious restriction of S.
Indeed, it appears that the sets of imputations S which we are considering
correspond to the "standard of behavior" connected with a social
organization. Let us examine this assertion more closely.
Let the physical basis of a social economy be given, or, to take a
broader view of the matter, of a society. 1 According to all tradition and
experience human beings have a characteristic way of adjusting themselves
to such a background. This consists of not setting up one rigid system of
apportionment, i.e. of imputation, but rather a variety of alternatives,
which will probably all express some general principles but nevertheless
differ among themselves in many particular respects. 2 This system of
imputations describes the "established order of society" or "accepted
standard of behavior."
Obviously no random grouping of imputations will do as such a "standard
of behavior": it will have to satisfy certain conditions which characterize
it as a possible order of things. This concept of possibility must clearly
provide for conditions of stability. The reader will observe, no doubt,
that our procedure in the previous paragraphs is very much in this spirit:
The sets S of imputations x, y, z, ... correspond to what we now call
"standard of behavior," and the conditions (4:A:a) and (4:A:b), or (4:A:c),
which characterize the solution S express, indeed, a stability in the above
sense.
4.6.2. The disjunction into (4:A:a) and (4:A:b) is particularly appropriate
in this instance. Recall that domination of y by x means that the
imputation x, if taken into consideration, excludes acceptance of the
imputation y (this without forecasting what imputation will ultimately be
accepted, cf. 4.4.1. and 4.4.2.). Thus (4:A:a) expresses the fact that the
standard of behavior is free from inner contradictions: No imputation y
belonging to S, i.e. conforming with the "accepted standard of behavior",
can be upset, i.e. dominated, by another imputation x of the same kind.
On the other hand (4:A:b) expresses that the "standard of behavior" can
be used to discredit any non-conforming procedure: Every imputation y
not belonging to S can be upset, i.e. dominated, by an imputation x
belonging to S.
Observe that we have not postulated in 4.5.3. that a y belonging to S
should never be dominated by any x. 3 Of course, if this should happen, then
x would have to be outside of S, due to (4:A:a). In the terminology of
social organizations: An imputation y which conforms with the "accepted
1 In the case of a game this means simply, as we have mentioned before, that the
rules of the game are given. But for the present simile the comparison with a social
economy is more useful. We suggest therefore that the reader forget temporarily the
analogy with games and think entirely in terms of social organization.
2 There may be extreme, or to use a mathematical term, "degenerate" special cases
where the setup is of such exceptional simplicity that a rigid single apportionment can
be put into operation. But it seems legitimate to disregard them as nontypical.
3 It can be shown, cf. (31:M) in 31.2.3., that such a postulate cannot be fulfilled
in general; i.e. that in all really interesting cases it is impossible to find an S which satisfies
it together with our other requirements.
standard of behavior" may be upset by another imputation x, but in this
case it is certain that x does not conform. 1 It follows from our other requirements
that then x is upset in turn by a third imputation z which again
conforms. Since y and z both conform, z cannot upset y, a further illustration
of the intransitivity of "domination."
Thus our solutions S correspond to such "standards of behavior" as
have an inner stability: once they are generally accepted they overrule
everything else and no part of them can be overruled within the limits of
the accepted standards. This is clearly how things are in actual social
organizations, and it emphasizes the perfect appropriateness of the circular
character of our conditions in 4.5.3.
4.6.3. We have previously mentioned, but purposely neglected to discuss,
an important objection: That neither the existence nor the uniqueness
of a solution S in the sense of the conditions (4:A:a) and (4:A:b), or (4:A:c),
in 4.5.3. is evident or established.
There can be, of course, no concessions as regards existence. If it
should turn out that our requirements concerning a solution S are, in any
special case, unfulfillable, this would certainly necessitate a fundamental
change in the theory. Thus a general proof of the existence of solutions S
for all particular cases 2 is most desirable. It will appear from our subsequent
investigations that this proof has not yet been carried out in full
generality but that in all cases considered so far solutions were found.
As regards uniqueness the situation is altogether different. The often-mentioned
"circular" character of our requirements makes it rather
probable that the solutions are not in general unique. Indeed we shall in
most cases observe a multiplicity of solutions. 3 Considering what we have
said about interpreting solutions as stable "standards of behavior" this has
a simple and not unreasonable meaning, namely that given the same
physical background different "established orders of society" or "accepted
standards of behavior" can be built, all possessing those characteristics of
inner stability which we have discussed. Since this concept of stability
is admittedly of an "inner" nature i.e. operative only under the hypothesis
of general acceptance of the standard in question these different standards
may perfectly well be in contradiction with each other.
4.6.4. Our approach should be compared with the widely held view
that a social theory is possible only on the basis of some preconceived
principles of social purpose. These principles would include quantitative
statements concerning both the aims to be achieved in toto and the apportionments between individuals. Once they are accepted, a simple maximum
problem results.
¹ We use the word "conform" (to the "standard of behavior") temporarily as a synonym for being contained in S, and the word "upset" as a synonym for dominate.
² In the terminology of games: for all numbers of participants and for all possible rules of the game.
³ An interesting exception is 65.8.
Let us note that no such statement of principles is ever satisfactory
per se, and the arguments adduced in its favor are usually either those of inner stability or of less clearly defined kinds of desirability, mainly concerning distribution.
Little can be said about the latter type of motivation. Our problem
is not to determine what ought to happen in pursuance of any set of
necessarily arbitrary a priori principles, but to investigate where the
equilibrium of forces lies.
As far as the first motivation is concerned, it has been our aim to give
just those arguments precise and satisfactory form, concerning both global
aims and individual apportionments. This made it necessary to take up
the entire question of inner stability as a problem in its own right. A theory
which is consistent at this point cannot fail to give a precise account of the
entire interplay of economic interests, influence and power.
4.7. Games and Social Organizations
4.7. It may now be opportune to revive the analogy with games, which
we purposely suppressed in the previous paragraphs (cf. footnote 1 on
p. 41). The parallelism between the solutions S in the sense of 4.5.3. on
one hand and of stable " standards of behavior " on the other can be used
for corroboration of assertions concerning these concepts in both directions.
At least we hope that this suggestion will have some appeal to the reader.
We think that the procedure of the mathematical theory of games of
strategy gains definitely in plausibility by the correspondence which exists
between its concepts and those of social organizations. On the other
hand, almost every statement which we, or for that matter anyone else, ever made concerning social organizations, runs afoul of some existing
opinion. And, by the very nature of things, most opinions thus far could
hardly have been proved or disproved within the field of social theory.
It is therefore a great help that all our assertions can be borne out by specific
examples from the theory of games of strategy.
Such is indeed one of the standard techniques of using models in the
physical sciences. This two-way procedure brings out a significant function of models, not emphasized in their discussion in 4.1.3.
To give an illustration: The question whether several stable "orders of society" or "standards of behavior" based on the same physical background are possible or not, is highly controversial. There is little hope that it will be settled by the usual methods, because of the enormous complexity of this problem among other reasons. But we shall give specific
examples of games of three or four persons, where one game possesses several
solutions in the sense of 4.5.3. And some of these examples will be seen
to be models for certain simple economic problems. (Cf. 62.)
4.8. Concluding Remarks
4.8.1. In conclusion it remains to make a few remarks of a more formal
nature.
We begin with this observation: Our considerations started with single
imputations which were originally quantitative extracts from more
detailed combinatorial sets of rules. From these we had to proceed to
sets S of imputations, which under certain conditions appeared as solutions.
Since the solutions do not seem to be necessarily unique, the complete
answer to any specific problem consists not in finding a solution, but in
determining the set of all solutions. Thus the entity for which we look in
any particular problem is really a set of sets of imputations. This may seem
to be unnaturally complicated in itself; besides there appears no guarantee
that this process will not have to be carried further, conceivably because
of later difficulties. Concerning these doubts it suffices to say: First, the
mathematical structure of the theory of games of strategy provides a formal
justification of our procedure. Second, the previously discussed connections with "standards of behavior" (corresponding to sets of imputations) and of the multiplicity of "standards of behavior" on the same physical background (corresponding to sets of sets of imputations) make just this amount of complicatedness desirable.
One may criticize our interpretation of sets of imputations as "standards of behavior." Previously, in 4.1.2. and 4.1.4., we introduced a more elementary concept, which may strike the reader as a direct formulation of a "standard of behavior": this was the preliminary combinatorial concept of a solution as a set of rules for each participant, telling him how to behave in every possible situation of the game. (From these rules the single imputations were then extracted as a quantitative summary, cf. above.)
Such a simple view of the "standard of behavior" could be maintained, however, only in games in which coalitions and the compensations between coalition partners (cf. 4.3.2.) play no role, since the above rules do not provide for these possibilities. Games exist in which coalitions and compensations can be disregarded: e.g. the zero-sum two-person game mentioned in 4.2.3., and more generally the "inessential" games to be discussed in 27.3. and in (31:P) of 31.2.3. But the general, typical game, in particular all significant problems of a social exchange economy, cannot be treated without these devices. Thus the same arguments which forced us to consider sets of imputations instead of single imputations necessitate the abandonment of that narrow concept of "standard of behavior." Actually we shall call these sets of rules the "strategies" of the game.
4.8.2. The next subject to be mentioned concerns the static or dynamic
nature of the theory. We repeat most emphatically that our theory is
thoroughly static. A dynamic theory would unquestionably be more
complete and therefore preferable. But there is ample evidence from other
branches of science that it is futile to try to build one as long as the static
side is not thoroughly understood. On the other hand, the reader may
object to some definitely dynamic arguments which were made in the course
of our discussions. This applies particularly to all considerations concerning the interplay of various imputations under the influence of "domination," cf. 4.6.2. We think that this is perfectly legitimate. A static
theory deals with equilibria.¹ The essential characteristic of an equilibrium
is that it has no tendency to change, i.e. that it is not conducive to dynamic
developments. An analysis of this feature is, of course, inconceivable
without the use of certain rudimentary dynamic concepts. The important
point is that they are rudimentary. In other words: For the real dynamics, which investigates the precise motions, usually far away from equilibria, a much deeper knowledge of these dynamic phenomena is required.²,³
4.8.3. Finally let us note a point at which the theory of social phenomena
will presumably take a very definite turn away from the existing patterns of
mathematical physics. This is, of course, only a surmise on a subject where
much uncertainty and obscurity prevail.
Our static theory specifies equilibria, i.e. solutions in the sense of 4.5.3., which are sets of imputations. A dynamic theory, when one is found, will probably describe the changes in terms of simpler concepts: of a single imputation valid at the moment under consideration, or something similar. This indicates that the formal structure of this part of the theory, the relationship between statics and dynamics, may be generically different from that of the classical physical theories.⁴
All these considerations illustrate once more what a complexity of
theoretical forms must be expected in social theory. Our static analysis
alone necessitated the creation of a conceptual and formal mechanism which
is very different from anything used, for instance, in mathematical physics.
Thus the conventional view of a solution as a uniquely defined number or
aggregate of numbers was seen to be too narrow for our purposes, in spite
of its success in other fields. The emphasis on mathematical methods seems to be shifted more towards combinatorics and set theory, and away from the algorithm of differential equations which dominates mathematical physics.
¹ The dynamic theory deals also with inequilibria, even if they are sometimes called "dynamic equilibria."
² The above discussion of statics versus dynamics is, of course, not at all a construction ad hoc. The reader who is familiar with mechanics, for instance, will recognize in it a reformulation of well known features of the classical mechanical theory of statics and dynamics. What we do claim at this time is that this is a general characteristic of scientific procedure involving forces and changes in structures.
³ The dynamic concepts which enter into the discussion of static equilibria are parallel to the "virtual displacements" in classical mechanics. The reader may also remember at this point the remarks about "virtual existence" in 4.3.3.
⁴ Particularly from classical mechanics. The analogies of the type used in footnote 2 above cease at this point.
CHAPTER II
GENERAL FORMAL DESCRIPTION OF GAMES OF STRATEGY
5. Introduction
5.1. Shift of Emphasis from Economics to Games
5.1. It should be clear from the discussions of Chapter I that a theory of rational behavior, i.e. of the foundations of economics and of the main mechanisms of social organization, requires a thorough study of the "games of strategy." Consequently we must now take up the theory of games as an
independent subject. In studying it as a problem in its own right, our
point of view must of necessity undergo a serious shift. In Chapter I our
primary interest lay in economics. It was only after having convinced
ourselves of the impossibility of making progress in that field without a
previous fundamental understanding of the games that we gradually
approached the formulations and the questions which are partial to that
subject. But the economic viewpoints remained nevertheless the dominant
ones in all of Chapter I. From this Chapter II on, however, we shall have
to treat the games as games. Therefore we shall not mind if some points taken up have no economic connections whatever; it would not be possible to do full justice to the subject otherwise. Of course most of the main
concepts are still those familiar from the discussions of economic literature
(cf. the next section), but the details will often be altogether alien to it; and details, as usual, may dominate the exposition and overshadow the guiding principles.
5.2. General Principles of Classification and of Procedure
5.2.1. Certain aspects of "games of strategy" which were already
prominent in the last sections of Chapter I will not appear in the beginning
stages of the discussions which we are now undertaking. Specifically:
There will be at first no mention of coalitions between players and the
compensations which they pay to each other. (Concerning these, cf.
4.3.2., 4.3.3., in Chapter I.) We give a brief account of the reasons, which
will also throw some light on our general disposition of the subject.
An important viewpoint in classifying games is this: Is the sum of all
payments received by all players (at the end of the game) always zero; or
is this not the case? If it is zero, then one can say that the players pay only
to each other, and that no production or destruction of goods is involved.
All games which are actually played for entertainment are of this type. But
the economically significant schemes are most essentially not such. There
the sum of all payments, the total social product, will in general not be
zero, and not even constant. I.e., it will depend on the behavior of the players, the participants in the social economy. This distinction was already mentioned in 4.2.1., particularly in footnote 2, p. 34. We shall call games of the first-mentioned type zero-sum games, and those of the latter type non-zero-sum games.
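The zero-sum criterion just stated can be expressed as a direct check on a table of payments; the following is a minimal Python sketch (all names hypothetical):

```python
# Hypothetical illustration of the zero-sum criterion described above:
# a game is zero-sum when, for every possible play, the payments
# received by all players sum to zero.

def is_zero_sum(payoff_table):
    """payoff_table maps each play (a tuple of choices) to a tuple of
    payments, one per player."""
    return all(sum(payments) == 0 for payments in payoff_table.values())

# "Matching Pennies": each player picks heads (H) or tails (T);
# player 1 wins one unit on a match, player 2 wins on a mismatch.
matching_pennies = {
    ("H", "H"): (1, -1), ("T", "T"): (1, -1),
    ("H", "T"): (-1, 1), ("T", "H"): (-1, 1),
}
print(is_zero_sum(matching_pennies))  # True: players only pay each other
```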
We shall primarily construct a theory of the zero-sum games, but it will be found possible to dispose, with its help, of all games, without restriction. Precisely: We shall show that the general (hence in particular the variable-sum) n-person game can be reduced to a zero-sum (n + 1)-person game. (Cf. 56.2.2.) Now the theory of the zero-sum n-person game will be based on the special case of the zero-sum two-person game. (Cf. 25.2.) Hence our discussions will begin with a theory of these games, which will indeed be carried out in Chapter III.
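The reduction asserted here (carried out in 56.2.2.) amounts, in essence, to adjoining a fictitious player who receives the negative of the real players' total payment; a hypothetical sketch, with illustrative names:

```python
# A sketch of the reduction mentioned above (developed in 56.2.2. of
# the book): adjoin a fictitious (n+1)-th player who is paid minus the
# sum of the real players' payments, making every play sum to zero.

def add_fictitious_player(payoff_table):
    return {
        play: payments + (-sum(payments),)
        for play, payments in payoff_table.items()
    }

# A variable-sum two-person game (payments need not cancel):
game = {("a", "a"): (2, 1), ("a", "b"): (0, 0), ("b", "a"): (1, 3)}
zero_sum_game = add_fictitious_player(game)
print(zero_sum_game[("a", "a")])  # (2, 1, -3): now sums to zero
```

Of course the fictitious player makes no moves and joins no coalitions; only the bookkeeping changes.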
Now in zero-sum two-person games coalitions and compensations can play no role.¹ The questions which are essential in these games are of a different nature. These are the main problems: How does each player plan his course, i.e. how does one formulate an exact concept of a strategy? What information is available to each player at every stage
of the game? What is the role of a player being informed about the other
player's strategy? About the entire theory of the game?
5.2.2. All these questions are of course essential in all games, for any
number of players, even when coalitions and compensations have come into
their own. But for zero-sum two-person games they are the only ones which matter, as our subsequent discussions will show. Again, all these questions have been recognized as important in economics, but we think that in the theory of games they arise in a more elementary, as distinguished from composite, fashion. They can, therefore, be discussed in a precise way and, as we hope to show, be disposed of. But in the process of this
analysis it will be technically advantageous to rely on pictures and examples
which are rather remote from the field of economics proper, and belong
strictly to the field of games of the conventional variety. Thus the dis
cussions which follow will be dominated by illustrations from Chess,
" Matching Pennies," Poker, Bridge, etc., and not from the structure of
cartels, markets, oligopolies, etc.
At this point it is also opportune to recall that we consider all transactions at the end of a game as purely monetary ones, i.e. that we ascribe to all players an exclusively monetary profit motive. The meaning of this in terms of the utility concept was analyzed in 2.1.1. in Chapter I. For the present, particularly for the "zero-sum two-person games" to be discussed
¹ The only fully satisfactory "proof" of this assertion lies in the construction of a complete theory of all zero-sum two-person games, without use of those devices. This will be done in Chapter III, the decisive result being contained in 17. It ought to be clear by common sense, however, that "understandings" and "coalitions" can have no role here: Any such arrangement must involve at least two players, hence in this case all players, for whom the sum of payments is identically zero. I.e. there are no opponents left and no possible objectives.
first (cf. the discussion of 5.2.1.), it is an absolutely necessary simplification. Indeed, we shall maintain it through most of the theory, although
variants will be examined later on. (Cf. Chapter XII, in particular 66.)
5.2.3. Our first task is to give an exact definition of what constitutes a game. As long as the concept of a game has not been described with absolute mathematical (combinatorial) precision, we cannot hope to give exact and exhaustive answers to the questions formulated at the end of 5.2.1. Now while our first objective is, as was explained in 5.2.1., the theory of zero-sum two-person games, it is apparent that the exact description of what constitutes a game need not be restricted to this case. Consequently we can begin with the description of the general n-person game.
In giving this description we shall endeavor to do justice to all conceivable
nuances and complications which can arise in a game insofar as they are
not of an obviously inessential character. In this way we reach in several
successive steps a rather complicated but exhaustive and mathematically
precise scheme. And then we shall see that it is possible to replace this
general scheme by a vastly simpler one, which is nevertheless fully and
rigorously equivalent to it. Besides, the mathematical device which
permits this simplification is also of an immediate significance for our
problem: It is the introduction of the exact concept of a strategy.
It should be understood that the detour which leads to the ultimate, simple formulation of the problem over considerably more complicated ones is not avoidable. It is necessary to show first that all possible complications have been taken into consideration, and that the mathematical device in question does guarantee the equivalence of the involved setup to the simple one.
All this can and must be done for all games, of any number of players. But after this aim has been achieved in entire generality, the next objective of the theory is, as mentioned above, to find a complete solution for the zero-sum two-person game. Accordingly, this chapter will deal with all games, but the next one with zero-sum two-person games only. After they are disposed of and some important examples have been discussed, we shall begin to re-extend the scope of the investigation, first to zero-sum n-person games, and then to all games.
Coalitions and compensations will only reappear during this latter stage.
6. The Simplified Concept of a Game
6.1. Explanation of the Termini Technici
6.1. Before an exact definition of the combinatorial concept of a game
can be given, we must first clarify the use of some termini. There are
some notions which are quite fundamental for the discussion of games,
but the use of which in everyday language is highly ambiguous. The words
which describe them are used sometimes in one sense, sometimes in another,
and occasionally, worst of all, as if they were synonyms. We must therefore introduce a definite usage of termini technici, and rigidly adhere to it in all that follows.
First, one must distinguish between the abstract concept of a game,
and the individual plays of that game. The game is simply the totality
of the rules which describe it. Every particular instance at which the
game is played, in a particular way, from beginning to end, is a play.¹
Second, the corresponding distinction should be made for the moves,
which are the component elements of the game. A move is the occasion
of a choice between various alternatives, to be made either by one of the
players, or by some device subject to chance, under conditions precisely
prescribed by the rules of the game. The move is nothing but this abstract "occasion," with the attendant details of description, i.e. a component of the game. The specific alternative chosen in a concrete instance, i.e. in a concrete play, is the choice. Thus the moves are related to the choices in the same way as the game is to the play. The game consists of a sequence of moves, and the play of a sequence of choices.²
Finally, the rules of the game should not be confused with the strategies
of the players. Exact definitions will be given subsequently, but the
distinction which we stress must be clear from the start. Each player selects his strategy, i.e. the general principles governing his choices, freely. While any particular strategy may be good or bad, provided that these concepts can be interpreted in an exact sense (cf. 14.5. and 17.8.-17.10.), it is within the player's discretion to use or to reject it. The rules of the game, however, are absolute commands. If they are ever infringed, then the whole transaction by definition ceases to be the game described by those rules. In many cases it is even physically impossible to violate them.³
6.2. The Elements of the Game
6.2.1. Let us now consider a game Γ of n players who, for the sake of brevity, will be denoted by 1, …, n. The conventional picture provides that this game is a sequence of moves, and we assume that both the number and the arrangement of these moves are given ab initio. We shall see later that these restrictions are not really significant, and that they can be removed without difficulty. For the present let us denote the (fixed) number of moves in Γ by ν; this is an integer ν = 1, 2, …. The moves themselves we denote by ℳ_1, …, ℳ_ν, and we assume that this is the chronological order in which they are prescribed to take place.
¹ In most games everyday usage calls a play equally a game; thus in chess, in poker, in many sports, etc. In Bridge a play corresponds to a "rubber," in Tennis to a "set," but unluckily in these games certain components of the play are again called "games." The French terminology is tolerably unambiguous: "game" = "jeu," "play" = "partie."
² In this sense we would talk in chess of the first move, and of the choice "E2-E4."
³ E.g.: In Chess the rules of the game forbid a player to move his king into a position of "check." This is a prohibition in the same absolute sense in which he may not move a pawn sideways. But to move the king into a position where the opponent can "checkmate" him at the next move is merely unwise, but not forbidden.
Every move ℳ_κ, κ = 1, …, ν, actually consists of a number of alternatives, among which the choice which constitutes the move ℳ_κ takes place. Denote the number of these alternatives by α_κ and the alternatives themselves by A_κ(1), …, A_κ(α_κ).
The moves are of two kinds. A move of the first kind, or a personal move, is a choice made by a specific player, i.e. depending on his free decision and nothing else. A move of the second kind, or a chance move, is a choice depending on some mechanical device, which makes its outcome fortuitous with definite probabilities.¹ Thus for every personal move it must be specified which player's decision determines this move, whose move it is. We denote the player in question (i.e. his number) by k_κ. So k_κ = 1, …, n. For a chance move we put (conventionally) k_κ = 0. In this case the probabilities of the various alternatives A_κ(1), …, A_κ(α_κ) must be given. We denote these probabilities by p_κ(1), …, p_κ(α_κ) respectively.²
6.2.2. In a move ℳ_κ the choice consists of selecting an alternative from A_κ(1), …, A_κ(α_κ), i.e. its number 1, …, α_κ. We denote the number so chosen by σ_κ. Thus this choice is characterized by a number σ_κ = 1, …, α_κ. And the complete play is described by specifying all choices, corresponding to all moves ℳ_1, …, ℳ_ν. I.e. it is described by a sequence σ_1, …, σ_ν.
Now the rule of the game Γ must specify what the outcome of the play is for each player k = 1, …, n, if the play is described by a given sequence σ_1, …, σ_ν. I.e. what payments every player receives when the play is completed. Denote the payment to the player k by ℱ_k (ℱ_k > 0 if k receives a payment, ℱ_k < 0 if he must make one, ℱ_k = 0 if neither is the case). Thus each ℱ_k must be given as a function of the σ_1, …, σ_ν:

ℱ_k = F_k(σ_1, …, σ_ν),   k = 1, …, n.

We emphasize again that the rules of the game Γ specify the function F_k(σ_1, …, σ_ν) only as a function,³ i.e. the abstract dependence of each ℱ_k on the variables σ_1, …, σ_ν. But all the time each σ_κ is a variable, with the domain of variability 1, …, α_κ. A specification of particular numerical values for the σ_κ, i.e. the selection of a particular sequence σ_1, …, σ_ν, is no part of the game Γ. It is, as we pointed out above, the definition of a play.
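The scheme of 6.2.1.-6.2.2. can be transcribed almost literally into code. The following Python sketch is illustrative only; all class and function names are hypothetical:

```python
# Hypothetical transcription of the scheme in 6.2.1.-6.2.2.  A game
# is a fixed sequence of nu moves; move kappa offers alternatives
# 1, ..., alpha_kappa and belongs to player k_kappa, with k_kappa = 0
# marking a chance move (which then carries probabilities); the rules
# give each player's payment F_k as a function of the whole sequence
# of choices sigma_1, ..., sigma_nu.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Move:
    alpha: int          # number of alternatives at this move
    player: int         # k_kappa; 0 denotes a chance move
    probs: tuple = ()   # p_kappa(1), ..., p_kappa(alpha): chance only

@dataclass
class Game:
    n: int              # number of players
    moves: list         # M_1, ..., M_nu in chronological order
    payoff: Callable    # payoff(k, sigmas) = F_k(sigma_1, ..., sigma_nu)

# "Matching Pennies" in this scheme: two personal moves, two
# alternatives each (1 = heads, 2 = tails); player 1 wins on a match.
def pennies_payoff(k, sigmas):
    match = sigmas[0] == sigmas[1]
    return (1 if match else -1) * (1 if k == 1 else -1)

game = Game(n=2,
            moves=[Move(alpha=2, player=1), Move(alpha=2, player=2)],
            payoff=pennies_payoff)
play = (1, 2)           # a play is one choice sigma per move
print([game.payoff(k, play) for k in (1, 2)])  # [-1, 1]
```

Note that, exactly as in the text, the Game object fixes only the abstract function F_k; a particular sequence of choices is a play, not part of the game.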
¹ E.g., dealing cards from an appropriately shuffled deck, throwing dice, etc. It is even possible to include certain games of strength and skill, where "strategy" plays a role, e.g. Tennis, Football, etc. In these the actions of the players are up to a certain point personal moves, i.e. dependent upon their free decision, and beyond this point chance moves, the probabilities being characteristics of the player in question.
² Since the p_κ(1), …, p_κ(α_κ) are probabilities, they are necessarily numbers ≥ 0. Since they belong to disjunct but exhaustive alternatives, their sum (for a fixed κ) must be one. I.e.:

p_κ(1) + … + p_κ(α_κ) = 1.

³ For a systematic exposition of the concept of a function cf. 13.1.
6.3. Information and Preliminarity
6.3.1. Our description of the game Γ is not yet complete. We have
failed to include specifications about the state of information of every
player at each decision which he has to make, i.e. whenever a personal
move turns up which is his move. Therefore we now turn to this aspect
of the matter.
This discussion is best conducted by following the moves ℳ_1, …, ℳ_ν, as the corresponding choices are made.
Let us therefore fix our attention on a particular move ℳ_κ. If this ℳ_κ is a chance move, then nothing more need be said: the choice is decided by chance; nobody's will and nobody's knowledge of other things can influence it. But if ℳ_κ is a personal move, belonging to the player k_κ, then it is quite important what k_κ's state of information is when he forms his decision concerning ℳ_κ, i.e. his choice of σ_κ.
The only things he can be informed about are the choices corresponding to the moves preceding ℳ_κ, the moves ℳ_1, …, ℳ_{κ-1}. I.e. he may know the values of σ_1, …, σ_{κ-1}. But he need not know that much. It is an important peculiarity of Γ, just how much information concerning σ_1, …, σ_{κ-1} the player k_κ is granted, when he is called upon to choose σ_κ. We shall soon show in several examples what the nature of such limitations is.
The simplest type of rule which describes k_κ's state of information at ℳ_κ is this: a set Λ_κ, consisting of some numbers from among λ = 1, …, κ − 1, is given. It is specified that k_κ knows the values of the σ_λ with λ belonging to Λ_κ, and that he is entirely ignorant of the σ_λ with any other λ.
In this case we shall say, when λ belongs to Λ_κ, that λ is preliminary to κ. This implies λ = 1, …, κ − 1, i.e. λ < κ, but need not be implied by it. Or, if we consider, instead of λ, κ, the corresponding moves ℳ_λ, ℳ_κ: Preliminarity implies anteriority,¹ but need not be implied by it.
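The simplest type of information rule just described admits a direct transcription; a hypothetical Python sketch (names illustrative):

```python
# Hypothetical sketch of the information rule above: for each move
# kappa, a set Lambda[kappa] of earlier move indices whose choices
# the mover gets to see.  Preliminarity (lambda in Lambda[kappa])
# implies anteriority (lambda < kappa), but not conversely.

def is_valid_information_rule(Lambda):
    """Lambda maps each move index kappa to the set of indices
    preliminary to it; every such index must be anterior."""
    return all(lam < kappa for kappa, prelim in Lambda.items()
               for lam in prelim)

# Chess-like "perfect information": every anterior move is preliminary.
perfect = {kappa: set(range(1, kappa)) for kappa in range(1, 5)}
# Poker-like rule: move 3 (player 2's bid) sees move 2 (player 1's
# bid) but not move 1 (the deal of player 1's hand).
poker = {1: set(), 2: {1}, 3: {2}}
print(is_valid_information_rule(perfect), is_valid_information_rule(poker))
```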
6.3.2. In spite of its somewhat restrictive character, this concept of
preliminarity deserves a closer inspection. In itself, and in its relationship
to anteriority (cf. footnote 1 above), it gives occasion to various combinatorial possibilities. These have definite meanings in those games in which they occur, and we shall now illustrate them by some examples of particularly characteristic instances.
6.4. Preliminarity, Transitivity, and Signaling
6.4.1. We begin by observing that there exist games in which preliminarity and anteriority are the same thing. I.e., where the player k_κ who makes the (personal) move ℳ_κ is informed about the outcome of the choices of all anterior moves ℳ_1, …, ℳ_{κ-1}. Chess is a typical representative of this class of games of "perfect" information. They are generally
considered to be of a particularly rational character. We shall see in 15.,
specifically in 15.7., how this can be interpreted in a precise way.
¹ In time, λ < κ means that ℳ_λ occurs before ℳ_κ.
Chess has the further feature that all its moves are personal. Now it is possible to conserve the first-mentioned property, the equivalence of preliminarity and anteriority, even in games which contain chance moves. Backgammon is an example of this.¹ Some doubt might be entertained whether the presence of chance moves does not vitiate the "rational character" of the game mentioned in connection with the preceding examples. We shall see in 15.7.1. that this is not so if a very plausible interpretation of that "rational character" is adhered to. It is not important whether all moves are personal or not; the essential fact is that preliminarity and anteriority coincide.
6.4.2. Let us now consider games where anteriority does not imply preliminarity. I.e., where the player k_κ who makes the (personal) move ℳ_κ is not informed about everything that happened previously. There is a large family of games in which this occurs. These games usually contain chance moves as well as personal moves. General opinion considers them as being of a mixed character: while their outcome is definitely dependent on chance, they are also strongly influenced by the strategic abilities of the players.
Poker and Bridge are good examples. These two games show, furthermore, what peculiar features the notion of preliminarity can present, once it has been separated from anteriority. This point perhaps deserves a little more detailed consideration.
Anteriority, i.e. the chronological ordering of the moves, possesses the property of transitivity.² Now in the present case, preliminarity need not be transitive. Indeed it is neither in Poker nor in Bridge, and the conditions under which this occurs are quite characteristic.
Poker: Let ℳ_μ be the deal of his "hand" to player 1, a chance move; ℳ_λ the first bid of player 1, a personal move of 1; ℳ_κ the first (subsequent) bid of player 2, a personal move of 2. Then ℳ_μ is preliminary to ℳ_λ and ℳ_λ to ℳ_κ, but ℳ_μ is not preliminary to ℳ_κ.³ Thus we have intransitivity, but it involves both players. Indeed, it may first seem unlikely that preliminarity could in any game be intransitive among the personal moves of one particular player. It would require that this player should "forget" between the moves ℳ_λ and ℳ_κ the outcome of the choice connected with ℳ_μ, and it is difficult to see how this "forgetting" could be achieved, and
¹ The chance moves in Backgammon are the dice throws which decide the total number of steps by which each player's men may alternately advance. The personal moves are the decisions by which each player partitions that total number of steps allotted to him among his individual men. Also his decision to double the risk, and his alternative to accept or to give up when the opponent doubles. At every move, however, the outcome of the choices of all anterior moves is visible to all on the board.
² I.e.: If ℳ_μ is anterior to ℳ_λ and ℳ_λ to ℳ_κ, then ℳ_μ is anterior to ℳ_κ. Special situations where the presence or absence of transitivity was of importance were analyzed in 4.4.2., 4.6.2. of Chapter I in connection with the relation of domination.
³ I.e., 1 makes his first bid knowing his own "hand"; 2 makes his first bid knowing 1's (preceding) first bid; but at the same time 2 is ignorant of 1's "hand."
⁴ We assume that ℳ_μ is preliminary to ℳ_λ and ℳ_λ to ℳ_κ, but ℳ_μ not to ℳ_κ.
even enforced! Nevertheless our next example provides an instance of just this.
Bridge: Although Bridge is played by 4 persons, to be denoted by A, B, C, D, it should be classified as a two-person game. Indeed, A and C form a combination which is more than a voluntary coalition, and so do B and D. For A to cooperate with B (or D) instead of with C would be "cheating," in the same sense in which it would be "cheating" to look into B's cards or to fail to follow suit during the play. I.e. it would be a violation of the rules of the game. If three (or more) persons play poker, then it is perfectly permissible for two (or more) of them to cooperate against another player when their interests are parallel, but in Bridge A and C (and similarly B and D) must cooperate, while A and B are forbidden to cooperate. The natural way to describe this consists in declaring that A and C are really one player 1, and that B and D are really one player 2. Or, equivalently: Bridge is a two-person game, but the two players 1 and 2 do not play it themselves. 1 acts through two representatives A and C and 2 through two representatives B and D.
Consider now the representatives of 1, A and C. The rules of the game
restrict communication, i.e. the exchange of information, between them.
E.g.: let ℳ_μ be the deal of his "hand" to A, a chance move; ℳ_λ the first
card played by A, a personal move of 1; ℳ_κ the card played into this trick
by C, a personal move of 1. Then ℳ_μ is preliminary to ℳ_λ and ℳ_λ to ℳ_κ,
but ℳ_μ is not preliminary to ℳ_κ.¹ Thus we have again intransitivity, but
this time it involves only one player. It is worth noting how the necessary
"forgetting" of ℳ_μ between ℳ_λ and ℳ_κ was achieved by "splitting the
personality" of 1 into A and C.
6.4.3. The above examples show that intransitivity of the relation of
preliminarity corresponds to a very well known component of practical
strategy: to the possibility of "signaling." If no knowledge of ℳ_μ is
available at ℳ_κ, but if it is possible to observe ℳ_λ's outcome at ℳ_κ, and ℳ_λ
has been influenced by ℳ_μ (by knowledge about ℳ_μ's outcome), then
ℳ_λ is really a signal from ℳ_μ to ℳ_κ, a device which (indirectly) relays
information. Now two opposite situations develop, according to whether
ℳ_λ and ℳ_κ are moves of the same player, or of two different players.

In the first case, which, as we saw, occurs in Bridge, the interest of
the player (who is k_λ = k_κ) lies in promoting the "signaling," i.e. the
spreading of information "within his own organization." This desire
finds its realization in the elaborate system of "conventional signals" in
Bridge.² These are parts of the strategy, and not of the rules of the game
¹ I.e. A plays his first card knowing his own "hand"; C contributes to this trick knowing
the (initiating) card played by A; but at the same time C is ignorant of A's "hand."
² Observe that this "signaling" is considered to be perfectly fair in Bridge if it is
carried out by actions which are provided for by the rules of the game. E.g. it is correct
for A and C (the two components of player 1, cf. 6.4.2.) to agree before the play begins
that an "original bid" of two trumps "indicates" a weakness of the other suits. But
it is incorrect, i.e. "cheating," to indicate a weakness by an inflection of the voice at
bidding, or by tapping on the table, etc.
54 DESCRIPTION OF GAMES OF STRATEGY
(cf. 6.1.), and consequently they may vary,¹ while the game of Bridge
remains the same.
In the second case, which, as we saw, occurs in Poker, the interest
of the player (we now mean k_λ; observe that here k_λ ≠ k_κ) lies in preventing
this "signaling," i.e. the spreading of information to the opponent (k_κ).
This is usually achieved by irregular and seemingly illogical behavior
(when making the choice at ℳ_λ); this makes it harder for the opponent
to draw inferences from the outcome of ℳ_λ (which he sees) concerning the
outcome of ℳ_μ (of which he has no direct news). I.e. this procedure makes
the "signal" uncertain and ambiguous. We shall see in 19.2.1. that this is
indeed the function of "bluffing" in Poker.²
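This inverted signaling can be made quantitative in a toy model. The sketch below is entirely ours: the 50-50 deal, the bidding rule and the function name are illustrative assumptions, not the Poker analysis of 19.2.1. The more often a weak hand bids high, the less the opponent can infer about the hand from the bid he observes.

```python
import random

def inference_accuracy(bluff_prob, trials=20000, seed=0):
    """Monte Carlo estimate of how often the opponent, seeing only the bid,
    guesses the hand correctly.  The hand is the chance move; the bid is the
    personal move influenced by it; all parameters are illustrative."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        strong = rng.random() < 0.5                      # chance move: the deal
        bid_high = strong or rng.random() < bluff_prob   # personal move: the bid
        guess_strong = bid_high                          # opponent's best inference
        correct += (guess_strong == strong)
    return correct / trials

# With no bluffing the bid is a perfect signal; constant bluffing destroys it.
print(inference_accuracy(0.0))   # 1.0
print(inference_accuracy(1.0))   # close to 0.5, i.e. no better than chance
```

The intermediate values of bluff_prob interpolate between these extremes, which is the "uncertain and ambiguous signal" of the text.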
We shall call these two procedures direct and inverted signaling. It ought
to be added that inverted signaling, i.e. misleading the opponent, occurs
in almost all games, including Bridge. This is so since it is based on the
intransitivity of preliminarity when several players are involved, which is
easy to achieve. Direct signaling, on the other hand, is rarer; e.g. Poker
contains no vestige of it. Indeed, as we pointed out before, it implies the
intransitivity of preliminarity when only one player is involved, i.e. it
requires a well-regulated "forgetfulness" of that player, which is obtained in
Bridge by the device of "splitting the player up" into two persons.

At any rate, Bridge and Poker seem to be reasonably characteristic
instances of these two kinds of intransitivity, of direct and of inverted
signaling, respectively.
Both kinds of signaling lead to a delicate problem of balancing in actual
playing, i.e. in the process of trying to define "good," "rational" playing.
Any attempt to signal more, or to signal less, than "unsophisticated" playing
would involve necessitates deviations from the "unsophisticated" way of
playing. And this is usually possible only at a definite cost, i.e. its direct
consequences are losses. Thus the problem is to adjust this "extra" signaling,
so that its advantages, by forwarding or by withholding information,
overbalance the losses which it causes directly. One feels that this involves
something like the search for an optimum, although it is by no means clearly
defined. We shall see how the theory of the two-person game already takes
care of this problem, and we shall discuss it exhaustively in one characteristic
instance. (This is a simplified form of Poker. Cf. 19.)
Let us observe, finally, that all important examples of intransitive
preliminarity are games containing chance moves. This is peculiar, because
there is no apparent connection between these two phenomena.³,⁴ Our
¹ They may even be different for the two players, i.e. for A and C on one hand and
for B and D on the other. But "within the organization" of one player, e.g. for A and C,
they must agree.
² And that "bluffing" is not at all an attempt to secure extra gains in any direct
sense when holding a weak hand. Cf. loc. cit.
³ Cf. the corresponding question when preliminarity coincides with anteriority, and
thus is transitive, as discussed in 6.4.1. As mentioned there, the presence or absence of
chance moves is immaterial in that case.
⁴ "Matching pennies" is an example which has a certain importance in this connection.
This and other related games will be discussed in 18.
COMPLETE CONCEPT OF A GAME 55
subsequent analysis will indeed show that the presence or absence of chance
moves scarcely influences the essential aspects of the strategies in this
situation.
7. The Complete Concept of a Game
7.1. Variability of the Characteristics of Each Move
7.1.1. We introduced in 6.2.1. the α_κ alternatives 𝒜_κ(1), ..., 𝒜_κ(α_κ)
of the move ℳ_κ. Also the index k_κ, which characterized the move as a
personal or chance one, and in the first case the player whose move it is;
and in the second case the probabilities p_κ(1), ..., p_κ(α_κ) of the above
alternatives. We described in 6.3.1. the concept of preliminarity with the help
of the sets Λ_κ, Λ_κ being the set of all λ (from among the λ = 1, ..., κ-1)
which are preliminary to κ. We failed to specify, however, whether all
these objects α_κ, k_κ, Λ_κ, and the 𝒜_κ(σ), p_κ(σ) for σ = 1, ..., α_κ, depend
solely on κ or also on other things. These "other things" can, of course,
only be the outcome of the choices corresponding to the moves which are
anterior to ℳ_κ, i.e. the numbers σ_1, ..., σ_{κ-1}. (Cf. 6.2.2.)
This dependence requires a more detailed discussion.
First, the dependence of the alternatives 𝒜_κ(σ) themselves (as distinguished
from their number α_κ!) on σ_1, ..., σ_{κ-1} is immaterial. We may
as well assume that the choice corresponding to the move ℳ_κ is made not
between the 𝒜_κ(σ) themselves, but between their numbers σ. In fine, it is
only the σ of ℳ_κ, i.e. σ_κ, which occurs in the expressions describing the
outcome of the play, i.e. in the functions ℱ_k(σ_1, ..., σ_ν), k = 1, ..., n.¹
(Cf. 6.2.2.)
Second, all dependences (on σ_1, ..., σ_{κ-1}) which arise when ℳ_κ turns
out to be a chance move, i.e. when k_κ = 0 (cf. the end of 6.2.1.), cause no
complications. They do not interfere with our analysis of the behavior of
the players. This disposes, in particular, of all probabilities p_κ(σ), since
they occur only in connection with chance moves. (The Λ_κ, on the other
hand, never occur in chance moves.)

Third, we must consider the dependences (on σ_1, ..., σ_{κ-1}) of the
α_κ, k_κ, Λ_κ when ℳ_κ turns out to be a personal move.² Now this possibility
is indeed a source of complications. And it is a very real possibility.³ The
reason is this.
¹ The form and nature of the alternatives 𝒜_κ(σ) offered at ℳ_κ might, of course, convey
to the player k_κ (if ℳ_κ is a personal move) some information concerning the anterior
values σ_1, ..., σ_{κ-1}, if the 𝒜_κ(σ) depend on those. But any such information should
be specified separately, as information available to k_κ at ℳ_κ. We have discussed the
simplest schemes concerning the subject of information in 6.3.1., and shall complete the
discussion in 7.1.2. The discussion of α_κ, k_κ, Λ_κ, which follows further below, is
characteristic also as far as the role of the 𝒜_κ(σ) as possible sources of information is concerned.
² Whether this happens for a given κ will itself depend on k_κ and hence indirectly
on σ_1, ..., σ_{κ-1}, since it is characterized by k_κ ≠ 0 (cf. the end of 6.2.1.).
³ E.g.: In Chess the number of possible alternatives at ℳ_κ depends on the positions
of the men, i.e. the previous course of the play. In Bridge the player who plays the first
7.1.2. The player k_κ must be informed at ℳ_κ of the values of α_κ, k_κ,
Λ_κ, since these are now part of the rules of the game which he must observe.
Insofar as they depend upon σ_1, ..., σ_{κ-1}, he may draw from them certain
conclusions concerning the values of σ_1, ..., σ_{κ-1}. But he is supposed
to know absolutely nothing concerning the σ_λ with λ not in Λ_κ! It is hard
to see how conflicts can be avoided.
To be precise: There is no conflict in this special case: Let Λ_κ be
independent of all σ_1, ..., σ_{κ-1}, and let α_κ, k_κ depend only on the σ_λ with λ in Λ_κ.
Then the player k_κ can certainly not get any information from α_κ, k_κ, Λ_κ
beyond what he knows anyhow (i.e. the values of the σ_λ with λ in Λ_κ). If
this is the case, we say that we have the special form of dependence.

But do we always have the special form of dependence? To take an
extreme case: What if Λ_κ is always empty, i.e. k_κ expected to be completely
uninformed at ℳ_κ, and yet e.g. α_κ explicitly dependent on some of the
σ_1, ..., σ_{κ-1}!
This is clearly inadmissible. We must demand that all numerical
conclusions which can be derived from the knowledge of α_κ, k_κ, Λ_κ must be
explicitly and ab initio specified as information available to the player k_κ
at ℳ_κ. It would be erroneous, however, to try to achieve this by including
in Λ_κ the indices λ of all those σ_λ on which α_κ, k_κ, Λ_κ explicitly depend. In
the first place great care must be exercised in order to avoid circularity in
this requirement, as far as Λ_κ is concerned.¹ But even if this difficulty does
not arise, because Λ_κ depends only on κ and not on σ_1, ..., σ_{κ-1}, i.e. if the
information available to every player at every moment is independent of
the previous course of the play, the above procedure may still be
inadmissible. Assume, e.g., that α_κ depends on a certain combination of some σ_λ
from among the λ = 1, ..., κ-1, and that the rules of the game do
indeed provide that the player k_κ at ℳ_κ should know the value of this
combination, but that they do not allow him to know more (i.e. the values of the
individual σ_1, ..., σ_{κ-1}). E.g.: He may know the value of σ_μ + σ_λ where
μ, λ are both anterior to κ (μ, λ < κ), but he is not allowed to know the
separate values of σ_μ and σ_λ.
One could try various tricks to bring back the above situation to our
earlier, simpler, scheme, which describes k_κ's state of information by means
of the set Λ_κ.² But it becomes completely impossible to disentangle the
various components of k_κ's information at ℳ_κ, if they themselves originate
from personal moves of different players, or of the same player but in
card to the next trick, i.e. k_κ at ℳ_κ, is the one who took the last trick, i.e. again dependent
upon the previous course of the play. In some forms of Poker, and some other related
games, the amount of information available to a player at a given moment, i.e. Λ_κ at ℳ_κ,
depends on what he and the others did previously.
¹ The σ_λ on which, among others, the Λ_κ depend are only defined if the totality of all Λ_κ,
for all sequences σ_1, ..., σ_{κ-1}, is considered. Should every Λ_κ contain these λ?
² In the above example one might try to replace the move ℳ_μ by a new one in which
not σ_μ, but σ_μ + σ_λ is chosen. ℳ_λ would remain unchanged. Then k_κ at ℳ_κ would be
informed about the outcome of the choice connected with the new ℳ_μ only.
different stages of information. In our above example this happens if
k_μ ≠ k_λ, or if k_μ = k_λ but the state of information of this player is not the
same at ℳ_μ and at ℳ_λ.¹
7.2. The General Description
7.2.1. There are still various, more or less artificial, tricks by which one
could try to circumvent these difficulties. But the most natural procedure
seems to be to admit them, and to modify our definitions accordingly.

This is done by sacrificing the Λ_κ as a means of describing the state of
information. Instead, we describe the state of information of the player k_κ
at the time of his personal move ℳ_κ explicitly: by enumerating those functions
of the variables σ_λ anterior to this move, i.e. of the σ_1, ..., σ_{κ-1}, the
numerical values of which he is supposed to know at this moment. This is
a system of functions, to be denoted by Φ_κ.

So Φ_κ is a set of functions

h = h(σ_1, ..., σ_{κ-1}).

Since the elements of Φ_κ describe the dependence on σ_1, ..., σ_{κ-1}, Φ_κ itself
is fixed, i.e. depending on κ only.² α_κ, k_κ may depend on σ_1, ..., σ_{κ-1}, and
since their values are known to k_κ at ℳ_κ, these functions

α_κ = α_κ(σ_1, ..., σ_{κ-1}),  k_κ = k_κ(σ_1, ..., σ_{κ-1})

must belong to Φ_κ. Of course, whenever it turns out that k_κ = 0 (for a
special set of σ_1, ..., σ_{κ-1} values), then the move ℳ_κ is a chance one (cf.
above), and no use will be made of Φ_κ, but this does not matter.

Our previous mode of description, with the Λ_κ, is obviously a special
case of the present one, with the Φ_κ.³
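The Φ_κ device is easy to picture computationally. The following Python fragment is our own illustrative sketch (the function names and index conventions are assumptions, not from the text): a player's information at a move is a list of functions of the anterior choices, and the example of 7.1.2, where only the sum σ_μ + σ_λ is known, shows why no set Λ_κ of indices could express the same thing.

```python
# A sketch (ours) of the Phi_kappa device: the information available at a
# move is a list of functions of the anterior choices sigma_1, ...,
# sigma_{kappa-1}, rather than a set Lambda_kappa of indices.

def info_at_move(functions, anterior_choices):
    """Evaluate every function of Phi_kappa on the anterior choices; the
    resulting tuple is everything the player may know at this move."""
    return tuple(f(anterior_choices) for f in functions)

# The example of 7.1.2: the player knows sigma_mu + sigma_lambda (here the
# choices with indices 0 and 1), but not their separate values.
phi = [lambda s: s[0] + s[1]]

print(info_at_move(phi, (2, 3)))   # (5,)
print(info_at_move(phi, (1, 4)))   # (5,) -- indistinguishable from (2, 3)
```

The Λ_κ description is recovered as the special case in which Φ_κ consists only of coordinate projections, e.g. lambda s: s[0] for the index 0 belonging to Λ_κ.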
7.2.2. At this point the reader may feel a certain dissatisfaction about
the turn which the discussion has taken. It is true that the discussion was
deflected into this direction by complications which arose in actual and
typical games (cf. footnote 3 on p. 55). But the necessity of replacing
the Λ_κ by the Φ_κ originated in our desire to maintain absolute formal
(mathematical) generality. These decisive difficulties, which caused us
to take this step (discussed in 7.1.2., particularly as illustrated by the
footnotes there), were really extrapolated. I.e. they were not characteristic
¹ In the instance of footnote 2 on p. 56, this means: If k_μ ≠ k_λ, there is no player to whom
the new move ℳ_μ (where σ_μ + σ_λ is chosen, and which ought to be personal) can be
attributed. If k_μ = k_λ but the state of information varies from ℳ_μ to ℳ_λ, then no state
of information can be satisfactorily prescribed for the new move ℳ_μ.
² This arrangement includes nevertheless the possibility that the state of information
expressed by Φ_κ depends on σ_1, ..., σ_{κ-1}. This is the case if, e.g., all functions
h(σ_1, ..., σ_{κ-1}) of Φ_κ show an explicit dependence on σ_μ for one set of values of σ_λ, while
being independent of σ_μ for other values of σ_λ. Yet Φ_κ is fixed.
³ If Φ_κ happens to consist of all functions of certain variables σ_λ, say of those for
which λ belongs to a given set M_κ, and of no others, then the Φ_κ description specializes
back to the Λ_κ one: Λ_κ being the above set M_κ. But we have seen that we cannot, in
general, count upon the existence of such a set.
of the original examples, which are actual games. (E.g. Chess and Bridge
can be described with the help of the Λ_κ.)

There exist games which require discussion by means of the Φ_κ. But in
most of them one could revert to the Λ_κ by means of various extraneous
tricks, and the entire subject requires a rather delicate analysis upon which
it does not seem worth while to enter here.¹ There exist unquestionably
economic models where the Φ_κ are necessary.²
The most important point, however, is this.
In pursuit of the objectives which we have set ourselves we must achieve
the certainty of having exhausted all combinatorial possibilities in connec
tion with the entire interplay of the various decisions of the players, their
changing states of information, etc. These are problems which have been
dwelt upon extensively in economic literature. We hope to show that they
can be disposed of completely. But for this reason we want to be safe
against any possible accusation of having overlooked some essential possi
bility by undue specialization.
Besides, it will be seen that all the formal elements which we are
introducing now into the discussion do not complicate it ultima analysi. I.e.
they complicate only the present, preliminary stage of formal description.
The final form of the problem turns out to be unaffected by them.
(Cf. 11.2.)
7.2.3. There remains only one more point to discuss: the specializing
assumption formulated at the very start of this discussion (at the beginning
of 6.2.1.) that both the number and the arrangement of the moves are given
(i.e. fixed) ab initio. We shall now see that this restriction is not essential.

Consider first the "arrangement" of the moves. The possible variability
of the nature of each move, i.e. of its k_κ, has already received full
consideration (especially in 7.2.1.). The ordering of the moves ℳ_κ, κ = 1,
..., ν, was from the start simply the chronological one. Thus there is
nothing left to discuss on this score.

Consider next the number of moves ν. This quantity too could be
variable, i.e. dependent upon the course of the play.³ In describing this
variability of ν a certain amount of care must be exercised.
¹ We mean card games where players may discard some cards without uncovering
them, and are allowed to take up or otherwise use openly a part of their discards later.
There exists also a game of double-blind Chess, sometimes called "Kriegsspiel," which
belongs in this class. (For its description cf. 9.2.3. With reference to that description:
Each player knows about the "possibility" of the other's anterior choices, without
knowing those choices themselves, and this "possibility" is a function of all anterior
choices.)
² Let a participant be ignorant of the full details of the previous actions of the others,
but let him be informed concerning certain statistical resultants of those actions.
³ It is, too, in most games: Chess, Backgammon, Poker, Bridge. In the case of Bridge
this variability is due first to the variable length of the "bidding" phase, and second to
the changing number of contracts needed to make a "rubber" (i.e. a play). Examples
of games with a fixed ν are harder to find: we shall see that we can make ν fixed in every
game by an artifice, but games in which ν is ab initio fixed are apt to be monotonous.
The course of the play is characterized by the sequence (of choices)
σ_1, ..., σ_ν (cf. 6.2.2.). Now one cannot state simply that ν may be a
function of the variables σ_1, ..., σ_ν, because the full sequence σ_1, ..., σ_ν
cannot be visualized at all without knowing beforehand what its length ν
is going to be.¹ The correct formulation is this: Imagine that the variables
σ_1, σ_2, σ_3, ... are chosen one after the other.² If this succession of choices
is carried on indefinitely, then the rules of the game must at some place ν
stop the procedure. The ν for which the stop occurs will, of course, depend
on all the choices up to that moment. It is the number of moves in that
particular play.
Now this stop rule must be such as to give a certainty that every
conceivable play will be stopped sometime. I.e. it must be impossible to
arrange the successive choices of σ_1, σ_2, σ_3, ... in such a manner (subject
to the restrictions of footnote 2 above) that the stop never comes. The
obvious way to guarantee this is to devise a stop rule for which it is
certain that the stop will come before a fixed moment, say ν*. I.e. that
while ν may depend on σ_1, σ_2, σ_3, ..., it is sure to be ν ≤ ν*, where ν*
does not depend on σ_1, σ_2, σ_3, ... . If this is the case we say that the
stop rule is bounded by ν*. We shall assume for the games which we
consider that they have stop rules bounded by (suitable, but fixed)
numbers ν*.³,⁴
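As an illustration (ours, not the authors'), a bounded stop rule can be mimicked in a few lines of Python; play, choose and stop_rule are hypothetical names, and the bound 3 · 9 · 18 = 486 is the figure computed for Écarté in footnote 3.

```python
import random

# A sketch (ours) of a play under a stop rule bounded by nu_star: the choices
# sigma_1, sigma_2, ... are made one after the other, and in any case the
# play halts after at most nu_star moves.

NU_STAR = 3 * 9 * 18   # the Ecarte bound of footnote 3: nu* = 486

def play(choose, stop_rule, nu_star=NU_STAR):
    """Run one play: call choose(kappa, history) for each move until
    stop_rule fires; the bound nu_star guarantees termination regardless."""
    history = []
    for kappa in range(1, nu_star + 1):
        history.append(choose(kappa, history))
        if stop_rule(history):
            break                  # nu = len(history) <= nu_star
    return history

# Example: stop as soon as the same choice occurs twice in a row.
rng = random.Random(1)
h = play(lambda kappa, hist: rng.randint(0, 1),
         lambda hist: len(hist) >= 2 and hist[-1] == hist[-2])
print(len(h) <= NU_STAR)   # True
```

An ineffective stop rule (one that never fires) is caught by the bound itself: the play then has exactly ν* moves.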
¹ I.e. one cannot say that the length of the game depends on all choices made in
connection with all moves, since it will depend on the length of the game whether certain
moves will occur at all. The argument is clearly circular.
² The domain of variability of σ_1 is 1, ..., α_1. The domain of variability of σ_2 is
1, ..., α_2, and may depend on σ_1: α_2 = α_2(σ_1). The domain of variability of σ_3 is
1, ..., α_3, and may depend on σ_1, σ_2: α_3 = α_3(σ_1, σ_2). Etc., etc.
³ This stop rule is indeed an essential part of every game. In most games it is easy
to find ν's fixed upper bound ν*. Sometimes, however, the conventional form of the
rules of the game does not exclude that the play might under exceptional conditions go
on ad infinitum. In all these cases practical safeguards have been subsequently
incorporated into the rules of the game with the purpose of securing the existence of the
bound ν*. It must be said, however, that these safeguards are not always absolutely
effective, although the intention is clear in every instance, and even where exceptional
infinite plays exist they are of little practical importance. It is nevertheless quite
instructive, at least from a mathematical point of view, to discuss a few typical examples.

We give four examples, arranged according to decreasing effectiveness.

Écarté: A play is a "rubber," a "rubber" consists of winning two "games" out of
three (cf. footnote 1 on p. 49), a "game" consists of winning five "points," and each
"deal" gives one player or the other one or two points. Hence a "rubber" is complete
after at most three "games," a "game" after at most nine "deals," and it is easy to
verify that a "deal" consists of 13, 14 or 18 moves. Hence ν* = 3 · 9 · 18 = 486.

Poker: A priori two players could keep "overbidding" each other ad infinitum. It is
therefore customary to add to the rules a proviso limiting the permissible number of
"overbids." (The amounts of the bids are also limited, so as to make the number of
alternatives α_κ at these personal moves finite.) This of course secures a finite ν*.
Bridge: The play is a "rubber" and this could go on forever if both sides (players)
invariably failed to make their contract. It is not inconceivable that the side which is in
danger of losing the "rubber," should in this way permanently prevent a completion of
the play by absurdly high bids. This is not done in practice, but there is nothing explicit
in the rules of the game to prevent it. In theory, at any rate, some stop rule should be
introduced in Bridge.
Chess: It is easy to construct sequences of choices (in the usual terminology:
Now we can make use of this bound ν* to get entirely rid of the
variability of ν.

This is done simply by extending the scheme of the game so that there
are always ν* moves ℳ_1, ..., ℳ_ν*. For every sequence σ_1, σ_2, σ_3, ...
everything is unchanged up to the move ℳ_ν, and all moves beyond ℳ_ν are
"dummy moves." I.e. if we consider a move ℳ_κ, κ = 1, ..., ν*, for a
sequence σ_1, σ_2, σ_3, ... for which ν < κ, then we make ℳ_κ a chance move
with one alternative only,¹ i.e. one at which nothing happens.

Thus the assumptions made at the beginning of 6.2.1., particularly
that ν is given ab initio, are justified ex post.
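The padding device just described can be sketched in one line of Python (our illustration; pad_play is a hypothetical name): plays shorter than ν* are extended by dummy chance moves with a single alternative, so that every play has exactly ν* entries.

```python
def pad_play(choices, nu_star, dummy=1):
    """Extend a play sigma_1, ..., sigma_nu to the fixed length nu_star by
    appending dummy moves: chance moves with one alternative (alpha = 1,
    k = 0, p(1) = 1), at which nothing happens."""
    assert len(choices) <= nu_star
    return list(choices) + [dummy] * (nu_star - len(choices))

print(pad_play([3, 1, 2], 6))   # [3, 1, 2, 1, 1, 1]
```

Since the dummy moves have a single alternative, they carry no information and cannot affect the outcome functions, which is why the device is harmless.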
8. Sets and Partitions
8.1. Desirability of a Set-theoretical Description of a Game
8.1. We have obtained a satisfactory and general description of the
concept of a game, which could now be restated with axiomatic precision
and rigidity to serve as a basis for the subsequent mathematical discussion.
It is worth while, however, before doing that, to pass to a different
formulation. This formulation is exactly equivalent to the one which we reached
in the preceding sections, but it is more unified, simpler when stated in a
general form, and it leads to more elegant and transparent notations.

In order to arrive at this formulation we must use the symbolism of
the theory of sets, and more particularly of partitions, more extensively
than we have done so far. This necessitates a certain amount of
explanation and illustration, which we now proceed to give.
"moves") particularly in the "end game" which can go on ad infinitum without ever
ending the play (i.e. producing a "checkmate"). The simplest ones are periodical, i.e.
indefinite repetitions of the same cycle of choices, but there exist nonperiodical ones as
well. All of them offer a very real possibility for the player who is in danger of losing to
secure sometimes a " tie." For this reason various "tie rules " i.e. stop rules are in use
just to prevent that phenomenon.
One well known "tie rule" is this: Any cycle of choices (i.e. "moves"), when three
times repeated, terminates the play by a "tie." This rule excludes most but not all
infinite sequences, and hence is really not effective.
Another "tie rule" is this: If no pawn has been moved and no officer taken (these
are "irreversible" operations, which cannot be undone subsequently) for 40 moves, then
the play is terminated by a "tie." It is easy to see that this rule is effective, although the
v* is enormous.
⁴ From a purely mathematical point of view, the following question could be asked:
Let the stop rule be effective in this sense only, that it is impossible so to arrange the
successive choices σ_1, σ_2, σ_3, ... that the stop never comes. I.e. let there always be a
finite ν dependent upon σ_1, σ_2, σ_3, ... . Does this by itself secure the existence of a
fixed, finite ν* bounding the stop rule? I.e. such that all ν ≤ ν*?

The question is highly academic, since all practical game rules aim to establish a ν*
directly. (Cf., however, footnote 3 above.) It is nevertheless quite interesting
mathematically.

The answer is "Yes," i.e. ν* always exists. Cf. e.g. D. König: Über eine Schlussweise
aus dem Endlichen ins Unendliche, Acta Litt. ac Scient. Univ. Szeged, Sect. Math.,
Vol. III/II (1927), pp. 121-130; particularly the Appendix, pp. 128-130.
¹ This means, of course, that α_κ = 1, k_κ = 0, and p_κ(1) = 1.
SETS AND PARTITIONS 61
8.2. Sets, Their Properties, and Their Graphical Representation
8.2.1. A set is an arbitrary collection of objects, absolutely no restriction
being placed on the nature and number of these objects, the elements
of the set in question. The elements constitute and determine the set as
such, without any ordering or relationship of any kind between them. I.e.
if two sets A, B are such that every element of A is also one of B and vice
versa, then they are identical in every respect, A = B. The relationship
of a being an element of the set A is also expressed by saying that a belongs
to A.¹
We shall be interested chiefly, although not always, in finite sets only,
i.e. sets consisting of a finite number of elements.
Given any objects α, β, γ, ... we denote the set of which they are the
elements by (α, β, γ, ...). It is also convenient to introduce a set which
contains no elements at all, the empty set.² We denote the empty set by ∅.
We can, in particular, form sets with precisely one element, one-element sets.
The one-element set (α), and its unique element α, are not the same thing
and should never be confused.³
We reemphasize that any objects can be elements of a set. Of course
we shall restrict ourselves to mathematical objects. But the elements
can, for instance, perfectly well be sets themselves (cf. footnote 3), thus
leading to sets of sets, etc. These latter are sometimes called by some other
equivalent name, e.g. systems or aggregates of sets. But this is not
necessary.
8.2.2. The main concepts and operations connected with sets are these:
(8:A:a) A is a subset of B, or B a superset of A, if every element of
A is also an element of B. In symbols: A ⊆ B or B ⊇ A. A is
a proper subset of B, or B a proper superset of A, if the above is
true, but if B contains elements which are not elements of A.
In symbols: A ⊂ B or B ⊃ A. We see: If A is a subset of B and
B is a subset of A, then A = B. (This is a restatement of the
principle formulated at the beginning of 8.2.1.) Also: A is a
proper subset of B if and only if A is a subset of B without
A = B.
¹ The mathematical literature of the theory of sets is very extensive. We make no
use of it beyond what will be said in the text. The interested reader will find more
information on set theory in the good introduction: A. Fraenkel: Einleitung in die
Mengenlehre, 3rd Edit., Berlin 1928; concise and technically excellent: F. Hausdorff:
Mengenlehre, 2nd Edit., Leipzig 1927.
² If two sets A, B are both without elements, then we may say that they have the
same elements. Hence, by what we said above, A = B. I.e. there exists only one
empty set.
This reasoning may sound odd, but it is nevertheless faultless.
³ There are some parts of mathematics where (α) and α can be identified. This is
then occasionally done, but it is an unsound practice. It is certainly not feasible in
general. E.g., let α be something which is definitely not a one-element set, e.g. a
two-element set (β, γ), or the empty set ∅. Then (α) and α must be distinguished, since
(α) is a one-element set while α is not.
(8:A:b) The sum of two sets A, B is the set of all elements of A
together with all elements of B, to be denoted by A ∪ B.
Similarly the sums of more than two sets are formed.¹
(8:A:c) The product, or intersection, of two sets A, B is the set of all
common elements of A and of B, to be denoted by A ∩ B.
Similarly the products of more than two sets are formed.¹
(8:A:d) The difference of two sets A, B (A the minuend, B the
subtrahend) is the set of all those elements of A which do not belong to
B, to be denoted by A - B.¹
(8:A:e) When B is a subset of A, we shall also call A - B the
complement of B in A. Occasionally it will be so obvious which set
A is meant that we shall simply write -B and talk about the
complement of B without any further specifications.
(8:A:f) Two sets A, B are disjunct if they have no elements in
common, i.e. if A ∩ B = ∅.
(8:A:g) A system (set) 𝒜 of sets is said to be a system of pairwise
disjunct sets if all pairs of different elements of 𝒜 are disjunct sets,
i.e. if for A, B belonging to 𝒜, A ≠ B implies A ∩ B = ∅.
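The operations (8:A:a)-(8:A:g) correspond directly to the built-in set operations of, e.g., Python; the following sketch (ours, with illustrative sets) restates the definitions in that notation.

```python
from itertools import combinations

A = {1, 2, 3}
B = {2, 3, 4}
C = {5, 6}

print(A <= A | B)        # True: A is a subset of the sum A u B   (8:A:a,b)
print(A < A | B)         # True: a proper subset, since 4 is not in A
print(A & B)             # {2, 3}: the product (intersection)     (8:A:c)
print(A - B)             # {1}: the difference                    (8:A:d)
print((A | B) - B)       # {1}: the complement of B in A u B      (8:A:e)
print(A.isdisjoint(C))   # True: A and C are disjunct             (8:A:f)

# (8:A:g): a system of pairwise disjunct sets.
system = [{1}, {2, 3}, {4}]
pairwise_disjoint = all(X.isdisjoint(Y) for X, Y in combinations(system, 2))
print(pairwise_disjoint)   # True
```

Note that, exactly as in the text, A ⊆ B and B ⊆ A together force A == B in this notation as well.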
8.2.3. At this point some graphical illustrations may be helpful.
We denote the objects which are elements of sets in these considerations
by dots (Figure 1). We denote sets by encircling the dots (elements)
Figure 1.
which belong to them, writing the symbol which denotes the set across
the encircling line in one or more places (Figure 1). The sets A, C in this
figure are, by the way, disjunct, while A, B are not.
¹ This nomenclature of sums, products, differences is traditional. It is based on
certain algebraic analogies which we shall not use here. In fact, the algebra of these
operations ∪, ∩, also known as Boolean algebra, has a considerable interest of its own.
Cf. e.g. A. Tarski: Introduction to Logic, New York, 1941. Cf. further Garrett Birkhoff:
Lattice Theory, New York 1940. This book is of wider interest for the understanding of
the modern abstract method. Chapt. VI deals with Boolean Algebras. Further
literature is given there.
With this device we can also represent sums, products and differences of
sets (Figure 2). In this figure neither A is a subset of B nor B one of A;
hence neither the difference A - B nor the difference B - A is a
complement. In the next figure, however, B is a subset of A, and so A - B is the
complement of B in A (Figure 3).
Figure 2.
Figure 3.
8.3. Partitions, Their Properties, and Their Graphical Representation
8.3.1. Let a set Ω and a system of sets 𝒜 be given. We say that 𝒜 is a partition in Ω if it fulfills the two following requirements:
(8:B:a) Every element A of 𝒜 is a subset of Ω, and not empty.
(8:B:b) 𝒜 is a system of pairwise disjunct sets.
This concept too has been the subject of an extensive literature.¹
We say for two partitions 𝒜, ℬ that 𝒜 is a subpartition of ℬ, if they fulfill this condition:
(8:B:c) Every element A of 𝒜 is a subset of some element B of ℬ.²
Observe that if 𝒜 is a subpartition of ℬ and ℬ a subpartition of 𝒜, then 𝒜 = ℬ.³
Next we define:
(8:B:d) Given two partitions 𝒜, ℬ, we form the system of all those intersections A ∩ B, A running over all elements of 𝒜 and B over
¹ Cf. G. Birkhoff loc. cit. Our requirements (8:B:a), (8:B:b) are not exactly the customary ones. Precisely:
Ad (8:B:a): It is sometimes not required that the elements A of 𝒜 be not empty. Indeed, we shall have to make one exception in 9.1.3. (cf. footnote 4 on p. 69).
Ad (8:B:b): It is customary to require that the sum of all elements of 𝒜 be exactly the set Ω. It is more convenient for our purposes to omit this condition.
² Since 𝒜, ℬ are also sets, it is appropriate to compare the subset relation (as far as 𝒜, ℬ are concerned) with the subpartition relation. One verifies immediately that if 𝒜 is a subset of ℬ then 𝒜 is also a subpartition of ℬ, but that the converse statement is not (generally) true.
³ Proof: Consider an element A of 𝒜. It must be a subset of an element B of ℬ, and B in turn a subset of an element A_1 of 𝒜. So A, A_1 have common elements (all those of the not empty set A), i.e. are not disjunct. Since they both belong to the partition 𝒜, this necessitates A = A_1. So A is a subset of B and B one of A (= A_1). Hence A = B, and thus A belongs to ℬ;
I.e.: 𝒜 is a subset of ℬ. (Cf. footnote 2 above.) Similarly ℬ is a subset of 𝒜. Hence 𝒜 = ℬ.
64
DESCRIPTION OF GAMES OF STRATEGY
all those of ℬ, which are not empty. This again is clearly a partition, the superposition of 𝒜, ℬ.¹
Finally, we also define the above relations for two partitions 𝒜, ℬ within a given set C.
(8:B:e) 𝒜 is a subpartition of ℬ within C, if every A belonging to 𝒜 which is a subset of C is also a subset of some B belonging to ℬ which is a subset of C.
(8:B:f) 𝒜 is equal to ℬ within C if the same subsets of C are elements of 𝒜 and of ℬ.
Clearly footnote 3 on p. 63 applies again, mutatis mutandis. Also, the above concepts within Ω are the same as the original unqualified ones.
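Requirements (8:B:a)–(8:B:d) can be checked mechanically. A sketch in Python, representing a partition as a collection of frozensets (the function names are ours; following footnote 1 on p. 63, the sets need not exhaust Ω):

```python
# A partition in Omega (8.3.1.): non-empty, pairwise disjunct subsets of
# Omega.  Per the text's convention, their sum need not be all of Omega.

def is_partition_in(P, Omega):
    sets = list(P)
    return (all(s and s <= Omega for s in sets)          # (8:B:a)
            and all(not (sets[i] & sets[j])              # (8:B:b)
                    for i in range(len(sets))
                    for j in range(i + 1, len(sets))))

def is_subpartition(P, Q):
    # (8:B:c): every element of P is a subset of some element of Q.
    return all(any(A <= B for B in Q) for A in P)

def superposition(P, Q):
    # (8:B:d): all non-empty intersections A ∩ B.
    return {A & B for A in P for B in Q} - {frozenset()}

Omega = frozenset(range(6))
P = {frozenset({0, 1}), frozenset({2, 3})}
Q = {frozenset({0, 1, 2, 3})}
```

As footnote 1 on p. 64 states, the superposition is a subpartition of both of its arguments; the example above, where P already refines Q, makes the superposition coincide with P.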
Figure 4.
Figure 5.
8.3.2. We give again some graphical illustrations, in the sense of 8.2.3.
We begin by picturing a partition. We shall not give the elements of the partition, which are sets, names, but denote each one by an encircling line (Figure 4).
We picture next two partitions 𝒜, ℬ, distinguishing them by marking the encircling lines of the elements of 𝒜 and of the elements of ℬ with two different line styles
¹ It is easy to show that the superposition of 𝒜, ℬ is a subpartition of both 𝒜 and ℬ, and that every partition 𝒞 which is a subpartition of both 𝒜 and ℬ is also one of their superposition. Hence the name. Cf. G. Birkhoff, loc. cit. Chapt. III.
(Figure 5). In this figure 𝒜 is a subpartition of ℬ. In the following one neither 𝒜 is a subpartition of ℬ nor is ℬ one of 𝒜 (Figure 6). We leave it to the reader to determine the superposition of 𝒜, ℬ in this figure.
Figure 6.
Figure 7.
Figure 8.
Figure 9.
Another, more schematic, representation of partitions obtains by representing the set Ω by one dot, and every element of the partition, which is a
subset of Ω, by a line going upward from this dot. Thus the partition 𝒜 of Figure 5 will be represented by a much simpler drawing (Figure 7). This representation does not indicate the elements within the elements of the partition, and it cannot be used to represent several partitions in Ω simultaneously, as was done in Figure 6. However, this deficiency can be removed if the two partitions 𝒜, ℬ in Ω are related as in Figure 5: if 𝒜 is a subpartition of ℬ. In this case we can represent Ω again by a dot at the bottom, every element of ℬ by a line going upward from this dot, as in Figure 7, and every element of 𝒜 as another line going further upward, beginning at the upper end of that line of ℬ which represents the element of ℬ of which this element of 𝒜 is a subset. Thus we can represent the two partitions 𝒜, ℬ of Figure 5 (Figure 8). This representation is again less revealing than the corresponding one of Figure 5. But its simplicity makes it possible to extend it further than pictures in the vein of Figures 4–6 could practically go. Specifically: We can picture by this device a sequence of partitions 𝒜_1, …, 𝒜_μ, where each one is a subpartition of its immediate predecessor. We give a typical example with μ = 5 (Figure 9).
Configurations of this type have been studied in mathematics, and are known as trees.
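Such a tree can be recovered mechanically from the sequence of partitions: each element of a partition is attached to the unique element of its coarser predecessor that contains it. A sketch in Python, with partitions represented as sets of frozensets (parent_of is our name):

```python
# Build the tree of a refining sequence of partitions: each element of a
# partition is linked to the element of the preceding partition containing it.

def parent_of(A, previous_partition):
    """The unique element of the coarser partition that contains A."""
    for B in previous_partition:
        if A <= B:
            return B
    raise ValueError("not a subpartition of its predecessor")

# A chain of partitions, each a subpartition of the one before
# (a small analogue of Figure 9, with mu = 3).
chain = [
    {frozenset({0, 1, 2, 3})},
    {frozenset({0, 1}), frozenset({2, 3})},
    {frozenset({0}), frozenset({1}), frozenset({2, 3})},
]

# The edges of the tree: (parent cell, child cell) pairs.
edges = [(parent_of(A, chain[k]), A)
         for k in range(len(chain) - 1) for A in chain[k + 1]]
```

Each chain of cells from the root downward corresponds to one path of the drawing in Figure 9.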
8.4. Logistic Interpretation of Sets and Partitions
8.4.1. The notions which we have described in 8.2.1.–8.3.2. will be useful in the discussion of games which follows, because of the logistic interpretation which can be put upon them.
Let us begin with the interpretation concerning sets.
If Ω is a set of objects of any kind, then every conceivable property which some of these objects may possess, and others not, can be fully characterized by specifying the set of those elements of Ω which have this property. I.e. if two properties correspond in this sense to the same set (the same subset of Ω), then the same elements of Ω will possess these two properties, i.e. they are equivalent within Ω, in the sense in which this term is understood in logic.
Now the properties (of elements of Ω) are not only in this simple correspondence with sets (subsets of Ω), but the elementary logical operations involving properties correspond to the set operations which we discussed in 8.2.2.
Thus the disjunction of two properties, i.e. the assertion that at least one of them holds, corresponds obviously to forming the sum of their sets, the operation A ∪ B. The conjunction of two properties, i.e. the assertion that both hold, corresponds to forming the product of their sets, the operation A ∩ B. And finally, the negation of a property, i.e. the assertion of the opposite, corresponds to forming the complement of its set, the operation −A.¹
¹ Concerning the connection of set theory and of formal logic cf., e.g., G. Birkhoff, loc. cit. Chapt. VIII.
SET-THEORETICAL DESCRIPTION 67
Instead of correlating the subsets of Ω to properties in Ω, as done above, we may equally well correlate them with all possible bodies of information concerning an otherwise undetermined element of Ω. Indeed, any such information amounts to the assertion that this unknown element of Ω possesses a certain specified property. It is equivalently represented by the set of all those elements of Ω which possess this property; i.e. to which the given information has narrowed the range of possibilities for the unknown element of Ω.
Observe, in particular, that the empty set corresponds to a property which never occurs, i.e. to an absurd information. And two disjunct sets correspond to two incompatible properties, i.e. to two mutually exclusive bodies of information.
8.4.2. We now turn our attention to partitions.
By reconsidering the definition (8:B:a), (8:B:b) in 8.3.1., and by restating it in our present terminology, we see: A partition is a system of pairwise mutually exclusive bodies of information concerning an unknown element of Ω, none of which is absurd in itself. In other words: A partition is a preliminary announcement which states how much information will be given later concerning an otherwise unknown element of Ω; i.e. to what extent the range of possibilities for this element will be narrowed later. But the actual information is not given by the partition; that would amount to selecting an element of the partition, since such an element is a subset of Ω, i.e. actual information.
We can therefore say that a partition in Ω is a pattern of information.
As to the subsets of Ω: we saw in 8.4.1. that they correspond to definite information. In order to avoid confusion with the terminology used for partitions, we shall use in this case, i.e. for a subset of Ω, the words actual information.
Consider now the definition (8:B:c) in 8.3.1., and relate it to our present terminology. This expresses for two partitions 𝒜, ℬ in Ω the meaning of 𝒜 being a subpartition of ℬ: it amounts to the assertion that the information announced by 𝒜 includes all the information announced by ℬ (and possibly more); i.e. that the pattern of information 𝒜 includes the pattern of information ℬ.
These remarks put the significance of the Figures 4–9 in 8.3.2. in a new light. It appears, in particular, that the tree of Figure 9 pictures a sequence of continually increasing patterns of information.
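In this reading, supplying the actual information about an unknown element amounts to naming the cell of the pattern that contains it, and a subpartition never yields a coarser cell. A toy sketch in Python, with partitions as sets of frozensets (cell_of is our name):

```python
def cell_of(omega, partition):
    # The actual information about an unknown element omega, given the
    # pattern of information (a partition): the cell containing omega.
    for A in partition:
        if omega in A:
            return A
    return None  # the partition need not exhaust the whole set

# A pattern and a finer pattern (a subpartition) announcing more information.
coarse = {frozenset({0, 1, 2}), frozenset({3, 4})}
fine = {frozenset({0}), frozenset({1, 2}), frozenset({3, 4})}
```

For any element, the cell of the finer pattern is a subset of the cell of the coarser one, which is exactly the sense in which the subpartition "includes" the other pattern of information.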
9. The Set-theoretical Description of a Game
9.1. The Partitions Which Describe a Game
9.1.1. We assume the number of moves, as we now know that we may, to be fixed. Denote this number again by ν, and the moves themselves again by ℳ_1, …, ℳ_ν.
Consider all possible plays of the game Γ, and form the set Ω of which they are the elements. If we use the description of the preceding sections,
then all possible plays are simply all possible sequences σ_1, …, σ_ν.¹ There exist only a finite number of such sequences,² and so Ω is a finite set.
There are, however, also more direct ways to form Ω. We can, e.g., form it by describing each play as the sequence of the ν + 1 consecutive positions³ which arise during its course. In general, of course, a given position may not be followed by an arbitrary position, but the positions which are possible at a given moment are restricted by the previous positions, in a way which must be precisely described by the rules of the game.⁴ Since our description of the rules of the game begins by forming Ω, it may be undesirable to let Ω itself depend so heavily on all the details of those rules. We observe, therefore, that there is no objection to including in Ω absurd sequences of positions as well.⁵ Thus it would be perfectly acceptable even to let Ω consist of all sequences of ν + 1 successive positions, without any restrictions whatsoever.
Our subsequent descriptions will show how the really possible plays are to be selected from this, possibly redundant, set Ω.
9.1.2. ν and Ω being given, we enter upon the more elaborate details of the course of a play.
Consider a definite moment during this course, say that one which immediately precedes a given move ℳ_κ. At this moment the following general specifications must be furnished by the rules of the game.
First it is necessary to describe to what extent the events which have led up to the move ℳ_κ⁶ have determined the course of the play. Every particular sequence of these events narrows the set Ω down to a subset A_κ: this being the set of all those plays from Ω, the course of which is, up to ℳ_κ, the particular sequence of events referred to. In the terminology of the earlier sections: Ω is, as pointed out in 9.1.1., the set of all sequences σ_1, …, σ_ν; then A_κ would be the set of those sequences σ_1, …, σ_ν for which the σ_1, …, σ_{κ−1} have given numerical values (cf. footnote 6 above). But from our present broader point of view we need only say that A_κ must be a subset of Ω.
Now the various possible courses the game may have taken up to ℳ_κ must be represented by different sets A_κ. Any two such courses, if they are different from each other, initiate two entirely disjunct sets of plays; i.e. no play can have begun (i.e. run up to ℳ_κ) both ways at once. This means that any two different sets A_κ must be disjunct.
¹ Cf. in particular 6.2.2. The range of the σ_1, …, σ_ν is described in footnote 2 on p. 59.
² Verification by means of the footnote referred to above is immediate.
³ Before ℳ_1, between ℳ_1 and ℳ_2, between ℳ_2 and ℳ_3, etc., etc., between ℳ_{ν−1} and ℳ_ν, after ℳ_ν.
⁴ This is similar to the development of the sequence σ_1, …, σ_ν, as described in footnote 2 on p. 59.
⁵ I.e. ones which will ultimately be found to be disallowed by the fully formulated rules of the game.
⁶ I.e. the choices connected with the anterior moves ℳ_1, …, ℳ_{κ−1}, i.e. the numerical values σ_1, …, σ_{κ−1}.
Thus the complete formal possibilities of the course of all conceivable plays of our game up to ℳ_κ are described by a family of pairwise disjunct subsets of Ω. This is the family of all the sets A_κ mentioned above. We denote this family by 𝒜_κ.
The sum of all sets A_κ contained in 𝒜_κ must contain all possible plays. But since we explicitly permitted a redundancy of Ω (cf. the end of 9.1.1.), this sum need nevertheless not be equal to Ω. Summing up:
(9:A) 𝒜_κ is a partition in Ω.
We could also say that the partition 𝒜_κ describes the pattern of information of a person who knows everything that happened up to ℳ_κ;¹ e.g. of an umpire who supervises the course of the play.²
9.1.3. Second, it must be known what the nature of the move ℳ_κ is going to be. This is expressed by the k_κ of 6.2.1.: k_κ = 1, …, n if the move is personal and belongs to the player k_κ; k_κ = 0 if the move is chance. k_κ may depend upon the course of the play up to ℳ_κ, i.e. upon the information embodied in 𝒜_κ.³ This means that k_κ must be a constant within each set A_κ of 𝒜_κ, but that it may vary from one A_κ to another.
Accordingly we may form for every k = 0, 1, …, n a set B_κ(k), which contains all sets A_κ with k_κ = k, the various B_κ(k) being disjunct. Thus the B_κ(k), k = 0, 1, …, n, form a family of disjunct subsets of Ω. We denote this family by ℬ_κ.
(9:B) ℬ_κ is again a partition in Ω. Since every A_κ of 𝒜_κ is a subset of some B_κ(k) of ℬ_κ, therefore 𝒜_κ is a subpartition of ℬ_κ.
But while there was no occasion to specify any particular enumeration of the sets A_κ of 𝒜_κ, it is not so with ℬ_κ. ℬ_κ consists of exactly n + 1 sets B_κ(k), k = 0, 1, …, n, which in this way appear in a fixed enumeration by means of the k = 0, 1, …, n.⁴ And this enumeration is essential, since it replaces the function k_κ (cf. footnote 3 above).
9.1.4. Third, the conditions under which the choice connected with the move ℳ_κ is to take place must be described in detail.
Assume first that ℳ_κ is a chance move, i.e. that we are within the set B_κ(0). Then the significant quantities are: the number of alternatives α_κ and the probabilities p_κ(1), …, p_κ(α_κ) of these various alternatives (cf. the end of 6.2.1.). As was pointed out in 7.1.1. (this was the second item
¹ I.e. the outcome of all choices connected with the moves ℳ_1, …, ℳ_{κ−1}. In our earlier terminology: the values of σ_1, …, σ_{κ−1}.
² It is necessary to introduce such a person since, in general, no player will be in possession of the full information embodied in 𝒜_κ.
³ In the notations of 7.2.1., and in the sense of the preceding footnotes: k_κ = k_κ(σ_1, …, σ_{κ−1}).
⁴ Thus ℬ_κ is really not a set and not a partition, but a more elaborate concept: it consists of the sets B_κ(k), k = 0, 1, …, n, in this enumeration.
It possesses, however, the properties (8:B:a), (8:B:b) of 8.3.1., which characterize a partition. Yet even there an exception must be made: among the sets B_κ(k) there can be empty ones.
of the discussion there), all these quantities may depend upon the entire information embodied in 𝒜_κ (cf. footnote 3 on p. 69), since ℳ_κ is now a chance move. I.e. α_κ and the p_κ(1), …, p_κ(α_κ) must be constant within each set A_κ of 𝒜_κ,¹ but they may vary from one A_κ to another.
Within each one of these A_κ the choice among the alternatives 𝔄_κ(1), …, 𝔄_κ(α_κ) takes place, i.e. the choice of a σ_κ = 1, …, α_κ (cf. 6.2.2.). This can be described by specifying α_κ disjunct subsets of A_κ which correspond to the restriction expressed by A_κ, plus the choice of σ_κ which has taken place. We call these sets C_κ, and their system, consisting of all C_κ in all the A_κ which are subsets of B_κ(0), 𝒞_κ(0). Thus 𝒞_κ(0) is a partition in B_κ(0). And since every C_κ of 𝒞_κ(0) is a subset of some A_κ of 𝒜_κ, therefore 𝒞_κ(0) is a subpartition of 𝒜_κ.
The α_κ are determined by 𝒞_κ(0);² hence we need not mention them any more. For the p_κ(1), …, p_κ(α_κ) this description suggests itself: with every C_κ of 𝒞_κ(0) a number p_κ(C_κ) (its probability) must be associated, subject to the equivalents of footnote 2 on p. 50.³
9.1.5. Assume, secondly, that ℳ_κ is a personal move, say of the player k = 1, …, n, i.e. that we are within the set B_κ(k). In this case we must specify the state of information of the player k at ℳ_κ. In 6.3.1. this was described by means of the set Λ_κ, in 7.2.1. by means of the family of functions Φ_κ, the latter description being the more general and the final one. According to this description k knows at ℳ_κ the values of all functions h(σ_1, …, σ_{κ−1}) of Φ_κ and no more. This amount of information operates a subdivision of B_κ(k) into several disjunct subsets, corresponding to the various possible contents of k's information at ℳ_κ. We call these sets D_κ, and their system 𝒟_κ(k). Thus 𝒟_κ(k) is a partition in B_κ(k).
Of course k's information at ℳ_κ is part of the total information existing at that moment, in the sense of 9.1.2., which is embodied in 𝒜_κ. Hence in an A_κ of 𝒜_κ which is a subset of B_κ(k), no ambiguity can exist, i.e. this A_κ cannot possess common elements with more than one D_κ of 𝒟_κ(k). This means that the A_κ in question must be a subset of a D_κ of 𝒟_κ(k). In other words: within B_κ(k), 𝒜_κ is a subpartition of 𝒟_κ(k).
In reality the course of the play is narrowed down at ℳ_κ within a set A_κ of 𝒜_κ. But the player k, whose move ℳ_κ is, does not know as much: as far as he is concerned, the play is merely within a set D_κ of 𝒟_κ(k). He must now make the choice among the alternatives 𝔄_κ(1), …, 𝔄_κ(α_κ), i.e. the choice of a σ_κ = 1, …, α_κ. As was pointed out in 7.1.2. and 7.2.1. (particularly at the end of 7.2.1.), α_κ may well be variable, but it can only depend upon the information embodied in 𝒟_κ(k). I.e. it must be a constant within the set D_κ of 𝒟_κ(k) to which we have restricted ourselves. Thus the choice of a σ_κ = 1, …, α_κ can be described by specifying α_κ disjunct subsets of D_κ, which correspond to the restriction expressed by D_κ, plus the
¹ We are within B_κ(0), hence all this refers only to A_κ's which are subsets of B_κ(0).
² α_κ is the number of those C_κ of 𝒞_κ(0) which are subsets of the given A_κ.
³ I.e. every p_κ(C_κ) ≥ 0, and for each A_κ, with the sum extended over all C_κ of 𝒞_κ(0) which are subsets of A_κ, Σ p_κ(C_κ) = 1.
choice of σ_κ which has taken place. We call these sets C_κ, and their system, consisting of all C_κ in all the D_κ of 𝒟_κ(k), 𝒞_κ(k). Thus 𝒞_κ(k) is a partition in B_κ(k). And since every C_κ of 𝒞_κ(k) is a subset of some D_κ of 𝒟_κ(k), therefore 𝒞_κ(k) is a subpartition of 𝒟_κ(k).
The α_κ are determined by 𝒞_κ(k);¹ hence we need not mention them any more. α_κ must not be zero, i.e., given a D_κ of 𝒟_κ(k), some C_κ of 𝒞_κ(k), which is a subset of D_κ, must exist.²
9.2. Discussion of These Partitions and Their Properties
9.2.1. We have completely described in the preceding sections the situation at the moment which precedes the move ℳ_κ. We proceed now to discuss what happens as we go along these moves κ = 1, …, ν. It is convenient to add to these a κ = ν + 1 too, which corresponds to the conclusion of the play, i.e. follows after the last move ℳ_ν.
For κ = 1, …, ν we have, as we discussed in the preceding sections, the partitions
𝒜_κ, ℬ_κ = (B_κ(0), B_κ(1), …, B_κ(n)), 𝒞_κ(0), 𝒞_κ(1), …, 𝒞_κ(n), 𝒟_κ(1), …, 𝒟_κ(n).
All of these, with the sole exception of 𝒜_κ, refer to the move ℳ_κ, hence they need not and cannot be defined for κ = ν + 1. But 𝒜_{ν+1} has a perfectly good meaning, as its discussion in 9.1.2. shows: It represents the full information which can conceivably exist concerning a play, i.e. the individual identity of the play.³
At this point two remarks suggest themselves: In the sense of the above observations 𝒜_1 corresponds to a moment at which no information is available at all. Hence 𝒜_1 should consist of the one set Ω. On the other hand, 𝒜_{ν+1} corresponds to the possibility of actually identifying the play which has taken place. Hence 𝒜_{ν+1} is a system of one-element sets.
We now proceed to describe the transition from κ to κ + 1, when κ = 1, …, ν.
9.2.2. Nothing can be said about the change in the ℬ_κ, 𝒞_κ(k), 𝒟_κ(k) when κ is replaced by κ + 1; our previous discussions have shown that when this replacement is made, anything may happen to those objects, i.e. to what they represent.
It is possible, however, to tell how 𝒜_{κ+1} obtains from 𝒜_κ.
The information embodied in 𝒜_{κ+1} obtains from that one embodied in 𝒜_κ by adding to it the outcome of the choice connected with the move ℳ_κ.⁴ This ought to be clear from the discussions of 9.1.2. Thus the
¹ α_κ is the number of those C_κ of 𝒞_κ(k) which are subsets of the given A_κ.
² We required this for k = 1, …, n only, although it must be equally true for k = 0, with an A_κ, subset of B_κ(0), in place of our D_κ of 𝒟_κ(k). But it is unnecessary to state it for that case, because it is a consequence of footnote 3 on p. 70; indeed, if no C_κ of the desired kind existed, the Σ p_κ(C_κ) of loc. cit. would be 0 and not 1.
³ In the sense of footnote 1 on p. 69, the values of all σ_1, …, σ_ν. And the sequence σ_1, …, σ_ν characterizes, as stated in 6.2.2., the play itself.
⁴ In our earlier terminology: the value of σ_κ.
information in 𝒜_{κ+1} which goes beyond that in 𝒜_κ is precisely the information embodied in the 𝒞_κ(0), 𝒞_κ(1), …, 𝒞_κ(n).
This means that the partition 𝒜_{κ+1} obtains by superposing the partition 𝒜_κ with all partitions 𝒞_κ(0), 𝒞_κ(1), …, 𝒞_κ(n). I.e. by forming the intersection of every A_κ in 𝒜_κ with every C_κ in any 𝒞_κ(0), 𝒞_κ(1), …, 𝒞_κ(n), and then throwing away the empty sets.
Owing to the relationship of 𝒜_κ and of the 𝒞_κ(k) to the sets B_κ(k), as discussed in the preceding sections, we can say a little more about this process of superposition.
In B_κ(0), 𝒞_κ(0) is a subpartition of 𝒜_κ (cf. the discussion in 9.1.4.). Hence there 𝒜_{κ+1} simply coincides with 𝒞_κ(0). In B_κ(k), k = 1, …, n, 𝒞_κ(k) and 𝒜_κ are both subpartitions of 𝒟_κ(k) (cf. the discussion in 9.1.5.). Hence there 𝒜_{κ+1} obtains by first taking every D_κ of 𝒟_κ(k), then for every such D_κ all A_κ of 𝒜_κ and all C_κ of 𝒞_κ(k) which are subsets of this D_κ, and forming all intersections A_κ ∩ C_κ.
Every such set A_κ ∩ C_κ represents those plays which arise when the player k, with the information of D_κ before him, but in a situation which is really in A_κ (a subset of D_κ), makes the choice C_κ at the move ℳ_κ, so as to restrict things to C_κ.
Since this choice, according to what was said before, is a possible one, there exist such plays. I.e. the set A_κ ∩ C_κ must not be empty. We restate this:
(9:C) If A_κ of 𝒜_κ and C_κ of 𝒞_κ(k) are subsets of the same D_κ of 𝒟_κ(k), then the intersection A_κ ∩ C_κ must not be empty.
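The transition from 𝒜_κ to 𝒜_{κ+1} is thus a purely mechanical superposition. A sketch in Python, with partitions represented as sets of frozensets (next_umpire_partition is our name):

```python
def next_umpire_partition(A_k, choice_partitions):
    """The next umpire partition: superpose the current one with all the
    choice partitions C_kappa(0), ..., C_kappa(n), dropping empty sets."""
    cells = set()
    for C_part in choice_partitions:
        for A in A_k:
            for C in C_part:
                cells.add(A & C)
    return cells - {frozenset()}

# Toy chance move: Omega has four plays, the umpire knows nothing yet,
# and the chance choice splits Omega into two halves.
Omega = frozenset({0, 1, 2, 3})
A_1 = {Omega}                                    # no information at all
C_1 = [{frozenset({0, 1}), frozenset({2, 3})}]   # the one pattern of choice
A_2 = next_umpire_partition(A_1, C_1)
```

Since the choice partition here already refines A_1, the result simply coincides with it, as the text observes for the case of a chance move.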
9.2.3. There are games in which one might be tempted to set this requirement aside. These are games in which a player may make a legitimate choice which turns out subsequently to be a forbidden one; e.g. the double-blind Chess referred to in footnote 1 on p. 58: here a player can make an apparently possible choice ("move") on his own board, and will (possibly) be told only afterwards by the "umpire" that it is an "impossible" one.
This example is, however, spurious. The move in question is best resolved into a sequence of several alternative ones. It seems best to give the contemplated rules of double-blind Chess in full.
The game consists of a sequence of moves. At each move the "umpire" announces to both players whether the preceding move was a "possible" one. If it was not, the next move is a personal move of the same player as the preceding one; if it was, then the next move is the other player's personal move. At each move the player is informed about all of his own anterior choices, about the entire sequence of "possibility" or "impossibility" of all anterior choices of both players, and about all anterior instances where either player threatened check or took anything. But he knows the identity of his own losses only. In determining the course of the game, the "umpire" disregards the "impossible" moves. Otherwise the game is played like Chess, with a stop rule in the sense of footnote 3 on p. 59,
AXIOMATIC FORMULATION 73
amplified by the further requirement that no player may make ("try") the same choice twice in any one uninterrupted sequence of his own personal moves. (In practice, of course, the players need two chessboards, out of each other's view but both in the "umpire's" view, to obtain these conditions of information.)
At any rate we shall adhere to the requirement stated above. It will be seen that it is very convenient for our subsequent discussion (cf. 11.2.1.).
9.2.4. Only one thing remains: to reintroduce in our new terminology the quantities ℱ_k, k = 1, …, n, of 6.2.2. ℱ_k is the outcome of the play for the player k. ℱ_k must be a function of the actual play which has taken place.¹ If we use the symbol π to indicate that play, then we may say: ℱ_k is a function of a variable π with the domain of variability Ω. I.e.:
ℱ_k = ℱ_k(π), π in Ω, k = 1, …, n.
10. Axiomatic Formulation
10.1. The Axioms and Their Interpretations
10.1.1. Our description of the general concept of a game, with the new technique involving the use of sets and of partitions, is now complete. All constructions and definitions have been sufficiently explained in the past sections, and we can therefore proceed to a rigorous axiomatic definition of a game. This is, of course, only a concise restatement of the things which we discussed more broadly in the preceding sections.
We give first the precise definition, without any commentary:²
An n-person game Γ, i.e. the complete system of its rules, is determined by the specification of the following data:
(10:A:a) A number ν.
(10:A:b) A finite set Ω.
(10:A:c) For every k = 1, …, n: A function ℱ_k = ℱ_k(π), π in Ω.
(10:A:d) For every κ = 1, …, ν, ν + 1: A partition 𝒜_κ in Ω.
(10:A:e) For every κ = 1, …, ν: A partition ℬ_κ in Ω. ℬ_κ consists of n + 1 sets B_κ(k), k = 0, 1, …, n, enumerated in this way.
(10:A:f) For every κ = 1, …, ν and every k = 0, 1, …, n: A partition 𝒞_κ(k) in B_κ(k).
(10:A:g) For every κ = 1, …, ν and every k = 1, …, n: A partition 𝒟_κ(k) in B_κ(k).
(10:A:h) For every κ = 1, …, ν and every C_κ of 𝒞_κ(0): A number p_κ(C_κ).
These entities must satisfy the following requirements:
(10:1:a) 𝒜_κ is a subpartition of ℬ_κ.
(10:1:b) 𝒞_κ(0) is a subpartition of 𝒜_κ.
¹ In the old terminology, accordingly, we had ℱ_k = ℱ_k(σ_1, …, σ_ν). Cf. 6.2.2.
² For "explanations" cf. the end of 10.1.1. and the discussion of 10.1.2.
(10:1:c) For k = 1, …, n: 𝒞_κ(k) is a subpartition of 𝒟_κ(k).
(10:1:d) For k = 1, …, n: Within B_κ(k), 𝒜_κ is a subpartition of 𝒟_κ(k).
(10:1:e) For every κ = 1, …, ν and every A_κ of 𝒜_κ which is a subset of B_κ(0): For all C_κ of 𝒞_κ(0) which are subsets of this A_κ, p_κ(C_κ) ≥ 0, and for the sum extended over them Σ p_κ(C_κ) = 1.
(10:1:f) 𝒜_1 consists of the one set Ω.
(10:1:g) 𝒜_{ν+1} consists of one-element sets.
(10:1:h) For κ = 1, …, ν: 𝒜_{κ+1} obtains from 𝒜_κ by superposing it with all 𝒞_κ(k), k = 0, 1, …, n. (For details, cf. 9.2.2.)
(10:1:i) For κ = 1, …, ν: If A_κ of 𝒜_κ and C_κ of 𝒞_κ(k), k = 1, …, n, are subsets of the same D_κ of 𝒟_κ(k), then the intersection A_κ ∩ C_κ must not be empty.
(10:1:j) For κ = 1, …, ν and k = 1, …, n and every D_κ of 𝒟_κ(k): Some C_κ of 𝒞_κ(k), which is a subset of D_κ, must exist.
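As a concrete instance (anticipating the "freedom from contradiction" discussed in 10.2.), one can encode a trivial one-move game of pure chance and verify several of the requirements mechanically. A sketch in Python, not the authors' notation, with partitions represented as sets of frozensets:

```python
# A one-move game of pure chance (a fair coin): nu = 1, Omega = {0, 1},
# the single move is a chance move.  We verify requirements (10:1:a),
# (10:1:b), (10:1:e), (10:1:f), (10:1:g) mechanically.

Omega = frozenset({0, 1})
A1 = {Omega}                                  # umpire knows nothing yet
A2 = {frozenset({0}), frozenset({1})}         # after the move: full identity
B1 = {0: Omega}                               # B_1(0) = Omega: a chance move
C1_0 = {frozenset({0}), frozenset({1})}       # pattern of choice of chance
p = {frozenset({0}): 0.5, frozenset({1}): 0.5}

def is_subpartition(P, Q):
    # Every element of P is a subset of some element of Q.
    return all(any(A <= B for B in Q) for A in P)

assert is_subpartition(A1, B1.values())                    # (10:1:a)
assert is_subpartition(C1_0, A1)                           # (10:1:b)
for A in A1:                                               # (10:1:e)
    if A <= B1[0]:
        total = sum(p[C] for C in C1_0 if C <= A)
        assert all(p[C] >= 0 for C in C1_0) and total == 1.0
assert A1 == {Omega}                                       # (10:1:f)
assert all(len(A) == 1 for A in A2)                        # (10:1:g)
```

Requirements (10:1:i) and (10:1:j) are vacuous here, since the game has no personal moves; any game with a personal move would need non-empty choice cells within each information set.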
This definition should be viewed primarily in the spirit of the modern axiomatic method. We have even avoided giving names to the mathematical concepts introduced in (10:A:a)–(10:A:h) above, in order to establish no correlation with any meaning which the verbal associations of names may suggest. In this absolute "purity" these concepts can then be the objects of an exact mathematical investigation.¹
This procedure is best suited to develop sharply defined concepts. The application to intuitively given subjects follows afterwards, when the exact analysis has been completed. Cf. also what was said in 4.1.3. in Chapter I about the role of models in physics: The axiomatic models for intuitive systems are analogous to the mathematical models for (equally intuitive) physical systems.
Once this is understood, however, there can be no harm in recalling that this axiomatic definition was distilled out of the detailed empirical discussions of the sections which precede it. And it will facilitate its use, and make its structure more easily understood, if we give the intervening concepts appropriate names, which indicate, as much as possible, the intuitive background. And it is further useful to express, in the same spirit, the "meaning" of our postulates (10:1:a)–(10:1:j), i.e. the intuitive considerations from which they sprang.
All this will be, of course, merely a concise summary of the intuitive considerations of the preceding sections, which lead up to this axiomatization.
10.1.2. We state first the technical names for the concepts of (10:A:a)–(10:A:h) in 10.1.1.
¹ This is analogous to the present attitude in axiomatizing such subjects as logic, geometry, etc. Thus, when axiomatizing geometry, it is customary to state that the notions of points, lines, and planes are not to be a priori identified with anything intuitive; they are only notations for things about which only the properties expressed in the axioms are assumed. Cf., e.g., D. Hilbert: Die Grundlagen der Geometrie, Leipzig 1899, 2nd Engl. Edition Chicago 1910.
(10:A:a*) ν is the length of the game Γ.
(10:A:b*) Ω is the set of all plays of Γ.
(10:A:c*) ℱ_k(π) is the outcome of the play π for the player k.
(10:A:d*) 𝒜_κ is the umpire's pattern of information, an A_κ of 𝒜_κ is the umpire's actual information, at (i.e. immediately preceding) the move ℳ_κ. (For κ = ν + 1: At the end of the game.)
(10:A:e*) ℬ_κ is the pattern of assignment, a B_κ(k) of ℬ_κ is the actual assignment, of the move ℳ_κ.
(10:A:f*) 𝒞_κ(k) is the pattern of choice, a C_κ of 𝒞_κ(k) is the actual choice, of the player k at the move ℳ_κ. (For k = 0: Of chance.)
(10:A:g*) 𝒟_κ(k) is the player k's pattern of information, a D_κ of 𝒟_κ(k) the player k's actual information, at the move ℳ_κ.
(10:A:h*) p_κ(C_κ) is the probability of the actual choice C_κ at the (chance) move ℳ_κ.
We now formulate the "meaning" of the requirements (10:1 :a)
(10:1 :j) in the sense of the concluding discussion of 10.1.1 with the use of
the above nomenclature.
(10:1:a*) The umpire's pattern of information at the move ℳ_κ
includes the assignment of that move.
(10:1:b*) The pattern of choice at a chance move ℳ_κ includes the
umpire's pattern of information at that move.
(10:1:c*) The pattern of choice at a personal move ℳ_κ of the player k
includes the player k's pattern of information at that move.
(10:1:d*) The umpire's pattern of information at the move ℳ_κ
includes, to the extent to which this is a personal move of the
player k, the player k's pattern of information at that move.
(10:1:e*) The probabilities of the various alternative choices at a
chance move ℳ_κ behave like probabilities belonging to disjunct
but exhaustive alternatives.
(10:1:f*) The umpire's pattern of information at the first move is
void.
(10:1:g*) The umpire's pattern of information at the end of the game
determines the play fully.
(10:1:h*) The umpire's pattern of information at the move ℳ_{κ+1}
(for κ = ν: at the end of the game) obtains from that one at
the move ℳ_κ by superposing it with the pattern of choice at
the move ℳ_κ.
(10:1:i*) Let a move ℳ_κ be given, which is a personal move of the
player k, and any actual information of the player k at that
move also be given. Then any actual information of the
umpire at that move and any actual choice of the player k at
that move, which are both within (i.e. refinements of) this
actual (player's) information, are also compatible with each
other, i.e. they occur in actual plays.
76 DESCRIPTION OF GAMES OF STRATEGY
(10:1:j*) Let a move ℳ_κ be given, which is a personal move of the
player k, and any actual information of the player k at that
move also be given. Then the number of alternative actual
choices, available to the player k, is not zero.
This concludes our formalization of the general scheme of a game.
10.2. Logistic Discussion of the Axioms
10.2. We have not yet discussed those questions which are conventionally
associated in formal logics with every axiomatization: freedom from
contradiction, categoricity (completeness), and independence of the axioms. 1
Our system possesses the first and the last-mentioned properties, but not the
second one. These facts are easy to verify, and it is not difficult to see
that the situation is exactly what it should be. In summa:
Freedom from contradiction: There can be no doubt as to the existence
of games, and we did nothing but give an exact formalism for them. We
shall discuss the formalization of several games later in detail, cf. e.g. the
examples of 18., 19. From the strictly mathematical logistic point of
view, even the simplest game can be used to establish the fact of freedom
from contradiction. But our real interest lies, of course, with the more
involved games, which are the really interesting ones. 2
Categoricity (completeness): This is not the case, since there exist
many different games which fulfill these axioms. Concerning effective
examples, cf. the preceding reference.
The reader will observe that categoricity is not intended in this case,
since our axioms have to define a class of entities (games) and not a unique
entity. 3
Independence: This is easy to establish, but we do not enter upon it.
10.3. General Remarks Concerning the Axioms
10.3. There are two more remarks which ought to be made in connection
with this axiomatization.
First, our procedure follows the classical lines of obtaining an exact
formulation for intuitively empirically given ideas. The notion of a
game exists in general experience in a practically satisfactory form, which is
nevertheless too loose to be fit for exact treatment. The reader who has
followed our analysis will have observed how this imprecision was gradually
1 Cf. D. Hilbert, loc. cit.; O. Veblen & J. W. Young: Projective Geometry, New York
1910; H. Weyl: Philosophie der Mathematik und Naturwissenschaft, in Handbuch der
Philosophie, Munich, 1927.
2 This is the simplest game: ν = 0, Ω has only one element, say π_0. Consequently
no ℬ_κ, 𝒞_κ(k), 𝒟_κ(k) exist, while the only 𝒜_κ is 𝒜_1, consisting of Ω alone. Define
ℱ_k(π_0) = 0 for k = 1, ..., n. An obvious description of this game consists in the statement that
nobody does anything and that nothing happens. This also indicates that the freedom
from contradiction is not in this case an interesting question.
3 This is an important distinction in the general logistic approach to axiomatization.
Thus the axioms of Euclidean geometry describe a unique object while those of group
theory (in mathematics) or of rational mechanics (in physics) do not, since there exist
many different groups and many different mechanical systems.
AXIOMATIC FORMULATION 77
removed, the "zone of twilight" successively reduced, and a precise formulation
obtained eventually.
Second, it is hoped that this may serve as an example of the truth of a
much disputed proposition: That it is possible to describe and discuss
mathematically human actions in which the main emphasis lies on the
psychological side. In the present case the psychological element was
brought in by the necessity of analyzing decisions, the information on the
basis of which they are taken, and the interrelatedness of such sets of
information (at the various moves) with each other. This interrelatedness
originates in the connection of the various sets of information in time,
causation, and by the speculative hypotheses of the players concerning
each other.
There are of course many and most important aspects of psychology
which we have never touched upon, but the fact remains that a primarily
psychological group of phenomena has been axiomatized.
10.4. Graphical Representation
10.4.1. The graphical representation of the numerous partitions which
we had to use to represent a game is not easy. We shall not attempt to
treat this matter systematically: even relatively simple games seem to
lead to complicated and confusing diagrams, and so the usual advantages of
graphical representation do not obtain.
There are, however, some restricted possibilities of graphical representation,
and we shall say a few words about these.
In the first place it is clear from (10:1:h) in 10.1.1. (or equally by
(10:1:h*) in 10.1.2., i.e. by remembering the "meaning") that 𝒜_{κ+1} is a
subpartition of 𝒜_κ. I.e. in the sequence of partitions 𝒜_1, ..., 𝒜_ν, 𝒜_{ν+1}
each one is a subpartition of its immediate predecessor. Consequently this
much can be pictured with the devices of Figure 9 in 8.3.2., i.e. by a tree.
(Figure 9 is not characteristic in one way: since the length of the game Γ is
assumed to be fixed, all branches of the tree must continue to its full height.
Cf. Figure 10 in 10.4.2. below.) We shall not attempt to add the ℬ_κ,
𝒞_κ(k), 𝒟_κ(k) to this picture.
There is, however, a class of games where the sequence 𝒜_1, ..., 𝒜_ν, 𝒜_{ν+1}
tells practically the entire story. This is the important class, already
discussed in 6.4.1. and about which more will be said in 15., where
preliminarity and anteriority are equivalent. Its characteristics find a
simple expression in our present formalism.
10.4.2. Preliminarity and anteriority are equivalent, as the discussions
of 6.4.1., 6.4.2. and the interpretation of 6.4.3. show, if and only if every
player who makes a personal move knows at that moment the entire anterior
history of the play. Let the player be k, the move ℳ_κ. The assertion
that ℳ_κ is k's personal move means, then, that we are within B_κ(k). Hence
the assertion is that within B_κ(k) the player k's pattern of information
coincides with the umpire's pattern of information; i.e. that 𝒟_κ(k) is equal to
𝒜_κ within B_κ(k). But 𝒟_κ(k) is a partition in B_κ(k); hence the above
statement means that 𝒟_κ(k) simply is that part of 𝒜_κ which lies in B_κ(k).
We restate this:
(10:B) Preliminarity and anteriority coincide, i.e. every player who
makes a personal move is at that moment fully informed about
the entire anterior history of the play, if and only if 𝒟_κ(k) is
that part of 𝒜_κ which lies in B_κ(k).
If this is the case, then we can argue on as follows: By (10:1:c) in 10.1.1.
and the above, 𝒞_κ(k) must now be a subpartition of 𝒜_κ. This holds for
personal moves, i.e. for k = 1, ..., n; but for k = 0 it follows immediately
from (10:1:b) in 10.1.1.

Figure 10.

Now (10:1:h) in 10.1.1. permits the inference
from this (for details cf. 9.2.2.) that 𝒜_{κ+1} coincides with 𝒞_κ(k) in B_κ(k) for
all k = 0, 1, ..., n. (We could equally have used the corresponding
points in 10.1.2., i.e. the "meaning" of these concepts. We leave the verbal
expression of the argument to the reader.) But 𝒞_κ(k) is a partition in B_κ(k);
hence the above statement means that 𝒞_κ(k) simply is that part of 𝒜_{κ+1}
which lies in B_κ(k).
We restate this:
(10:C) If the condition of (10:B) is fulfilled, then 𝒞_κ(k) is that part
of 𝒜_{κ+1} which lies in B_κ(k).
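The conditions (10:B) and (10:C) are finite set-theoretical statements, so they can be checked mechanically on small examples. A minimal sketch in Python, where partitions are lists of disjoint frozensets; the particular sets below are invented for illustration, not taken from any specific game:

```python
def part_within(partition, S):
    """The part of a partition which lies in the set S: those of its
    elements which are subsets of S."""
    return [a for a in partition if a <= S]

# Invented data for one move kappa: the umpire's partition A_kappa, the
# assignment set B_kappa(k) of player k, and k's information D_kappa(k).
A_kappa = [frozenset({1, 2}), frozenset({3}), frozenset({4, 5})]
B_k = frozenset({1, 2, 3})
D_k = [frozenset({1, 2}), frozenset({3})]

# Condition (10:B): D_kappa(k) is that part of A_kappa lying in B_kappa(k).
perfect_info = part_within(A_kappa, B_k) == D_k
```

Here `perfect_info` comes out true, i.e. within B_κ(k) the player's information coincides with the umpire's, which is the perfect-information situation described above.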
STRATEGIES AND FINAL SIMPLIFICATION 79
Thus when preliminarity and anteriority coincide, then in our present
formalism the sequence 𝒜_1, ..., 𝒜_ν, 𝒜_{ν+1} and the sets B_κ(k), k = 0, 1,
..., n, for each κ = 1, ..., ν, describe the game fully. I.e. the picture
of Figure 9 in 8.3.2. must be amplified only by bracketing together those
elements of each 𝒜_κ which belong to the same set B_κ(k). (Cf. however, the
remark made in 10.4.1.) We can do this by encircling them with a line,
across which the number k of B_κ(k) is written. Such B_κ(k) as are empty can
be omitted. We give an example of this for ν = 5 and n = 3 (Figure 10).
In many games of this class even this extra device is not necessary,
because for every κ only one B_κ(k) is not empty. I.e. the character of each
move ℳ_κ is independent of the previous course of the play. 1 Then it
suffices to indicate at each 𝒜_κ the character of the move ℳ_κ, i.e. the unique
k = 0, 1, ..., n for which B_κ(k) ≠ ∅.
11. Strategies and the Final Simplification of the Description of a Game
11.1. The Concept of a Strategy and Its Formalization
11.1.1. Let us return to the course of an actual play π of the game Γ.
The moves ℳ_κ follow each other in the order κ = 1, ..., ν. At each
move ℳ_κ a choice is made, either by chance, if the play is in B_κ(0), or by a
player k = 1, ..., n, if the play is in B_κ(k). The choice consists in the
selection of a C_κ from 𝒞_κ(k) (k = 0 or k = 1, ..., n, cf. above), to which
the play is then restricted. If the choice is made by a player k, then precautions
must be taken that this player's pattern of information should be
at this moment 𝒟_κ(k), as required. (That this can be a matter of some
practical difficulty is shown by such examples as Bridge [cf. the end of
6.4.2.] and double-blind Chess [cf. 9.2.3.].)
Imagine now that each player k = 1, ..., n, instead of making each
decision as the necessity for it arises, makes up his mind in advance for all
possible contingencies; i.e. that the player k begins to play with a complete
plan: a plan which specifies what choices he will make in every possible situation,
for every possible actual information which he may possess at that
moment in conformity with the pattern of information which the rules of
the game provide for him for that case. We call such a plan a strategy.
Observe that if we require each player to start the game with a complete
plan of this kind, i.e. with a strategy, we by no means restrict his freedom
of action. In particular, we do not thereby force him to make decisions
on the basis of less information than there would be available for him in each
practical instance in an actual play. This is because the strategy is supposed
to specify every particular decision only as a function of just that
amount of actual information which would be available for this purpose in
an actual play. The only extra burden our assumption puts on the player
is the intellectual one to be prepared with a rule of behavior for all eventualities,
although he is to go through one play only. But this is an innocuous
assumption within the confines of a mathematical analysis. (Cf.
also 4.1.2.)
1 This is true for Chess. The rules of Backgammon permit interpretations both ways.
11.1.2. The chance component of the game can be treated in the same
way.
It is indeed obvious that it is not necessary to make the choices which are
left to chance, i.e. those of the chance moves, only when those moves come
along. An umpire could make them all in advance, and disclose their
outcome to the players at the various moments and to the varying extent,
as the rules of the game provide about their information.
It is true that the umpire cannot know in advance which moves will be
chance ones, and with what probabilities; this will in general depend upon
the actual course of the play. But as in the strategies which we considered
above he could provide for all contingencies: He could decide in advance
what the outcome of the choice in every possible chance move should be, for
every possible anterior course of the play, i.e. for every possible actual
umpire's information at the move in question. Under these conditions the
probabilities prescribed by the rules of the game for each one of the above
instances would be fully determined and so the umpire could arrange for
each one of the necessary choices to be effected by chance, with the appro
priate probabilities.
The outcomes could then be disclosed by the umpire to the players at
the proper moments and to the proper extent as described above.
We call such a preliminary decision of the choices of all conceivable
chance moves an umpire's choice.
We saw in the last section that the replacement of the choices of all
personal moves of the player k by the strategy of the player k is legitimate;
i.e. that it does not modify the fundamental character of the game Γ.
Clearly our present replacement of the choices of all chance moves by the
umpire's choice is legitimate in the same sense.
11.1.3. It remains for us to formalize the concepts of a strategy and of
the umpire's choice. The qualitative discussion of the two last sections
makes this an unambiguous task.
A strategy of the player k does this: Consider a move ℳ_κ. Assume that
it has turned out to be a personal move of the player k, i.e. assume that
the play is within B_κ(k). Consider a possible actual information of the
player k at that moment, i.e. consider a D_κ of 𝒟_κ(k). Then the strategy
in question must determine his choice at this juncture, i.e. a C_κ of 𝒞_κ(k)
which is a subset of the above D_κ.
Formalized:
(11:A) A strategy of the player k is a function Σ_k(κ; D_κ) which is
defined for every κ = 1, ..., ν and every D_κ of 𝒟_κ(k), and
whose value

Σ_k(κ; D_κ) = C_κ

has always these properties: C_κ belongs to 𝒞_κ(k) and is a subset
of D_κ.

That strategies, i.e. functions Σ_k(κ; D_κ) fulfilling the above requirement,
exist at all, coincides precisely with our postulate (10:1:j) in 10.1.1.
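Since all the sets involved are finite, the two defining properties of (11:A) can be verified mechanically. A minimal sketch in Python, with sets of plays modelled as frozensets; the particular game data below are invented for illustration:

```python
def is_strategy(sigma, info_patterns, choice_patterns):
    """Check (11:A): sigma must be defined for every pair (kappa, D), and
    its value C must belong to the pattern of choice at kappa and be a
    subset of D."""
    for kappa, pattern in info_patterns.items():
        for D in pattern:
            C = sigma.get((kappa, D))
            if C is None:                        # not defined everywhere
                return False
            if C not in choice_patterns[kappa]:  # C belongs to C_kappa(k)
                return False
            if not C <= D:                       # C is a subset of D
                return False
    return True

# A toy game of length nu = 2 with plays {1, 2, 3, 4} (invented data):
info = {1: [frozenset({1, 2, 3, 4})],
        2: [frozenset({1, 2}), frozenset({3, 4})]}
choices = {1: [frozenset({1, 2}), frozenset({3, 4})],
           2: [frozenset({1}), frozenset({2}), frozenset({3}), frozenset({4})]}
sigma = {(1, frozenset({1, 2, 3, 4})): frozenset({1, 2}),
         (2, frozenset({1, 2})): frozenset({2}),
         (2, frozenset({3, 4})): frozenset({3})}
```

Here `is_strategy(sigma, info, choices)` is true; replacing, say, the entry for (2, {1, 2}) by {3} would violate the subset requirement and make it false.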
An umpire's choice does this:
Consider a move ℳ_κ. Assume that it has turned out to be a chance
move, i.e. assume that the play is within B_κ(0). Consider a possible
actual information of the umpire at this moment; i.e. consider an A_κ of 𝒜_κ
which is a subset of B_κ(0). Then the umpire's choice in question must
determine the chance choice at this juncture, i.e. a C_κ of 𝒞_κ(0) which is a
subset of the above A_κ.
Formalized:

(11:B) An umpire's choice is a function Σ_0(κ; A_κ) which is defined for
every κ = 1, ..., ν and every A_κ of 𝒜_κ which is a subset of
B_κ(0), and whose value

Σ_0(κ; A_κ) = C_κ

has always these properties: C_κ belongs to 𝒞_κ(0) and is a subset
of A_κ.

Concerning the existence of umpire's choices, i.e. of functions Σ_0(κ; A_κ)
fulfilling the above requirement, cf. the remark after (11:A) above, and
footnote 2 on p. 71.
Since the outcome of the umpire's choice depends on chance, the corresponding
probabilities must be specified. Now the umpire's choice is an
aggregate of independent chance events. There is such an event, as
described in 11.1.2., for every κ = 1, ..., ν and every A_κ of 𝒜_κ which is a
subset of B_κ(0). I.e. for every pair κ, A_κ in the domain of definition of
Σ_0(κ; A_κ). As far as this event is concerned the probability of the particular
outcome Σ_0(κ; A_κ) = C_κ is p_κ(C_κ). Hence the probability of the entire
umpire's choice, represented by the function Σ_0(κ; A_κ), is the product of the
individual probabilities p_κ(C_κ). 1
Formalized:

(11:C) The probability of the umpire's choice, represented by the
function Σ_0(κ; A_κ), is the product of the probabilities p_κ(C_κ),
where Σ_0(κ; A_κ) = C_κ, and κ, A_κ run over the entire domain of
definition of Σ_0(κ; A_κ) (cf. (11:B) above).

If we consider the conditions of (10:1:e) in 10.1.1. for all these pairs
κ, A_κ, and multiply them all with each other, then these facts result: The
probabilities of (11:C) above are all ≥ 0, and their sum (extended over all
umpire's choices) is one. This is as it should be, since the totality of all
umpire's choices is a system of disjunct but exhaustive alternatives.
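The bookkeeping behind (11:C) can be illustrated on a toy domain of definition. The sketch below (pair labels, outcome names and probabilities are invented) enumerates every umpire's choice, computes its probability as the product of the individual p_κ(C_κ), and confirms that these probabilities sum to one:

```python
from itertools import product

# For each pair (kappa, A_kappa) in the domain of definition, the possible
# chance choices C_kappa and their probabilities p_kappa(C_kappa).
domain = {
    (1, "A1"): {"heads": 0.5, "tails": 0.5},
    (2, "A2"): {"low": 0.3, "mid": 0.3, "high": 0.4},
}

pairs = list(domain)
# An umpire's choice fixes, in advance, one outcome for every pair.
umpires_choices = list(product(*(domain[p] for p in pairs)))

def choice_probability(outcomes):
    """(11:C): the product of the individual probabilities p_kappa(C_kappa)."""
    prob = 1.0
    for pair, c in zip(pairs, outcomes):
        prob *= domain[pair][c]
    return prob

total = sum(choice_probability(u) for u in umpires_choices)
```

With the data above there are 2 x 3 = 6 umpire's choices, and `total` equals one, in accordance with the remark following (11:C).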
11.2. The Final Simplification of the Description of a Game
11.2.1. If a definite strategy has been adopted by each player k = 1,
..., n, and if a definite umpire's choice has been selected, then these
determine the entire course of the play uniquely, and accordingly its
1 The chance events in question must be treated as independent.
outcome too, for each player k = 1, ..., n. This should be clear from
the verbal description of all these concepts, but an equally simple formal
proof can be given.
Denote the strategies in question by Σ_k(κ; D_κ), k = 1, ..., n, and the
umpire's choice by Σ_0(κ; A_κ). We shall determine the umpire's actual
information at all moments κ = 1, ..., ν, ν + 1. In order to avoid
confusing it with the above variable A_κ, we denote it by Ā_κ.
Ā_1 is, of course, equal to Ω itself. (Cf. (10:1:f) in 10.1.1.)
Consider now a κ = 1, ..., ν, and assume that the corresponding Ā_κ
is already known. Then Ā_κ is a subset of precisely one B_κ(k), k = 0, 1,
..., n. (Cf. (10:1:a) in 10.1.1.) If k = 0, then ℳ_κ is a chance move, and
so the outcome of the choice is Σ_0(κ; Ā_κ). Accordingly Ā_{κ+1} = Σ_0(κ; Ā_κ).
(Cf. (10:1:h) in 10.1.1. and the details in 9.2.2.) If k = 1, ..., n, then
ℳ_κ is a personal move of the player k. Ā_κ is a subset of precisely one D_κ of
𝒟_κ(k). (Cf. (10:1:d) in 10.1.1.) So the outcome of the choice is Σ_k(κ; D_κ).
Accordingly Ā_{κ+1} = Ā_κ ∩ Σ_k(κ; D_κ). (Cf. (10:1:h) in 10.1.1. and the details
in 9.2.2.)
Thus we determine inductively Ā_1, Ā_2, Ā_3, ..., Ā_ν, Ā_{ν+1} in succession.
But Ā_{ν+1} is a one-element set (cf. (10:1:g) in 10.1.1.); denote its unique
element by π̄.
This π̄ is the actual play which took place. 1 Consequently the outcome
of the play is ℱ_k(π̄) for the player k = 1, ..., n.
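The inductive determination of Ā_1, ..., Ā_{ν+1} is a finite algorithm and can be written out directly. A minimal Python sketch on an invented game of length ν = 2 (one chance move followed by one personal move of player 1); all names and sets are illustrative only:

```python
def play_outcome(omega, B, D, sigmas, sigma0, nu):
    """Inductive derivation of the umpire's actual information
    A_bar_1, ..., A_bar_{nu+1}; the unique element of the last one
    is the actual play."""
    A_bar = frozenset(omega)                       # A_bar_1 = Omega
    for kappa in range(1, nu + 1):
        # A_bar is a subset of precisely one B_kappa(k)
        k = next(j for j, Bk in B[kappa].items() if A_bar <= Bk)
        if k == 0:                                 # chance move
            A_bar = sigma0[(kappa, A_bar)]
        else:                                      # personal move of player k
            Dk = next(d for d in D[kappa][k] if A_bar <= d)
            A_bar = A_bar & sigmas[k][(kappa, Dk)]
    (pi,) = A_bar                                  # A_bar_{nu+1} is one-element
    return pi

omega = {1, 2, 3, 4}
B = {1: {0: frozenset(omega)},                     # move 1: chance
     2: {1: frozenset(omega)}}                     # move 2: player 1
D = {2: {1: [frozenset({1, 2}), frozenset({3, 4})]}}
sigma0 = {(1, frozenset(omega)): frozenset({3, 4})}
sigmas = {1: {(2, frozenset({1, 2})): frozenset({1}),
              (2, frozenset({3, 4})): frozenset({4})}}
```

With these data the chance move restricts the play to {3, 4}, the strategy of player 1 then selects {4}, and the resulting play is 4, mirroring the two steps of the induction above.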
11.2.2. The fact that the strategies of all players and the umpire's
choice determine together the actual play, and so its outcome for each
player, opens up the possibility of a new and much simpler description of
the game Γ.
Consider a given player k = 1, ..., n. Form all possible strategies
of his, Σ_k(κ; D_κ), or for short Σ_k. While their number is enormous,
it is obviously finite. Denote it by β_k, and the strategies themselves by
Σ_k^1, ..., Σ_k^{β_k}.
Form similarly all possible umpire's choices, Σ_0(κ; A_κ), or for short Σ_0.
Again their number is finite. Denote it by β_0, and the umpire's choices by
Σ_0^1, ..., Σ_0^{β_0}. Denote their probabilities by p^1, ..., p^{β_0} respectively.
(Cf. (11:C) in 11.1.3.) All these probabilities are ≥ 0 and their sum is one.
(Cf. the end of 11.1.3.)
A definite choice of all strategies and of the umpire's choice, say Σ_k^{τ_k} for
k = 1, ..., n and for k = 0 respectively, where

τ_k = 1, ..., β_k for k = 0, 1, ..., n,

determines the play π̄ (cf. the end of 11.2.1.), and its outcome ℱ_k(π̄) for
each player k = 1, ..., n. Write accordingly

(11:1) ℱ_k(π̄) = 𝒢_k(τ_0, τ_1, ..., τ_n) for k = 1, ..., n.
1 The above inductive derivation of the Ā_1, Ā_2, Ā_3, ..., Ā_ν, Ā_{ν+1} is just a mathematical
reproduction of the actual course of the play. The reader should verify the parallelism
of the steps involved.
The entire play now consists of each player k choosing a strategy Σ_k^{τ_k},
i.e. a number τ_k = 1, ..., β_k; and of the chance umpire's choice of τ_0 = 1,
..., β_0, with the probabilities p^1, ..., p^{β_0} respectively.
The player k must choose his strategy, i.e. his τ_k, without any information
concerning the choices of the other players, or of the chance events (the
umpire's choice). This must be so since all the information he can at any
time possess is already embodied in his strategy Σ_k = Σ_k^{τ_k}, i.e. in the function
Σ_k = Σ_k(κ; D_κ). (Cf. the discussion of 11.1.1.) Even if he holds definite
views as to what the strategies of the other players are likely to be, they
must be already contained in the function Σ_k(κ; D_κ).
11.2.3. All this means, however, that Γ has been brought back to the
very simplest description, within the least complicated original framework of
the sections 6.2.1.-6.3.1. We have n + 1 moves, one chance and one
personal for each player k = 1, ..., n; each move has a fixed number of
alternatives, β_0 for the chance move and β_1, ..., β_n for the personal ones;
and every player has to make his choice with absolutely no information
concerning the outcome of all other choices. 1
Now we can get rid even of the chance move. If the choices of the
players have taken place, the player k having chosen τ_k, then the total
influence of the chance move is this: The outcome of the play for the player k
may be any one of the numbers

𝒢_k(τ_0, τ_1, ..., τ_n), τ_0 = 1, ..., β_0,

with the probabilities p^1, ..., p^{β_0} respectively. Consequently his
"mathematical expectation" of the outcome is

(11:2) ℋ_k(τ_1, ..., τ_n) = Σ_{τ_0=1}^{β_0} p^{τ_0} 𝒢_k(τ_0, τ_1, ..., τ_n).

The player's judgment must be directed solely by this "mathematical
expectation," because the various moves, and in particular the chance
move, are completely isolated from each other. 2 Thus the only moves
which matter are the n personal moves of the players k = 1, ..., n.
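Equation (11:2) is an ordinary weighted average over the umpire's choices and is immediate to compute. A small Python sketch, with τ_0 running over 0, ..., β_0 - 1 for convenience of indexing, and with a payoff function invented purely for illustration:

```python
def expectation(G_k, p):
    """(11:2): form H_k(tau_1, ..., tau_n) as the expectation of
    G_k(tau_0, tau_1, ..., tau_n) over the chance index tau_0,
    weighted by the umpire's-choice probabilities p."""
    def H_k(*taus):
        return sum(p[t0] * G_k(t0, *taus) for t0 in range(len(p)))
    return H_k

# Hypothetical two-person example: the payoff depends on the parity of
# the three indices combined; probabilities chosen arbitrarily.
G_1 = lambda t0, t1, t2: 1 if (t0 + t1 + t2) % 2 == 0 else -1
H_1 = expectation(G_1, [0.25, 0.75])
```

For instance H_1(1, 1) works out to 0.25 - 0.75 = -0.5, and the chance move has disappeared from the description, exactly as in the passage above.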
The final formulation is therefore this:

(11:D) The n-person game Γ, i.e. the complete system of its rules, is
determined by the specification of the following data:
(11:D:a) For every k = 1, ..., n: A number β_k.
(11:D:b) For every k = 1, ..., n: A function

ℋ_k = ℋ_k(τ_1, ..., τ_n), τ_j = 1, ..., β_j for j = 1, ..., n.
1 Owing to this complete disconnectedness of the n + 1 moves, it does not matter
in what chronological order they are placed.
2 We are entitled to use the unmodified "mathematical expectation" since we are
satisfied with the simplified concept of utility, as stressed at the end of 5.2.2. This
excludes in particular all those more elaborate concepts of "expectation," which are
really attempts at improving that naive concept of utility. (E.g. D. Bernoulli's "moral
expectation" in the "St. Petersburg Paradox.")
The course of a play of Γ is this:
Each player k chooses a number τ_k = 1, ..., β_k. Each
player must make his choice in absolute ignorance of the choices
of the others. After all choices have been made, they are
submitted to an umpire who determines that the outcome of the
play for the player k is ℋ_k(τ_1, ..., τ_n).
11.3. The Role of Strategies in the Simplified Form of a Game
11.3. Observe that in this scheme no space is left for any kind of further
"strategy." Each player has one move, and one move only; and he must
make it in absolute ignorance of everything else. 1 This complete crystallization
of the problem in this rigid and final form was achieved by our
manipulations of the sections from 11.1.1. on, in which the transition from
the original moves to strategies was effected. Since we now treat these
strategies themselves as moves, there is no need for strategies of a higher
order.
11.4. The Meaning of the Zero-sum Restriction
11.4. We conclude these considerations by determining the place of the
zero-sum games (cf. 5.2.1.) within our final scheme.
That Γ is a zero-sum game means, in the notation of 10.1.1., this:

(11:3) Σ_{k=1}^{n} ℱ_k(π) = 0 for all π of Ω.

If we pass from ℱ_k(π) to 𝒢_k(τ_0, τ_1, ..., τ_n), in the sense of 11.2.2., then this
becomes

(11:4) Σ_{k=1}^{n} 𝒢_k(τ_0, τ_1, ..., τ_n) = 0 for all τ_0, τ_1, ..., τ_n.

And if we finally introduce ℋ_k(τ_1, ..., τ_n), in the sense of 11.2.3., we obtain

(11:5) Σ_{k=1}^{n} ℋ_k(τ_1, ..., τ_n) = 0 for all τ_1, ..., τ_n.

Conversely, it is clear that the condition (11:5) makes the game Γ, which we
defined in 11.2.3., one of zero sum.
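That the zero-sum property survives the passage from (11:4) to (11:5) can also be confirmed numerically on a toy game. In the sketch below (indices zero-based, payoffs and probabilities invented, with G_1 = -G_2 so that (11:4) holds by construction) the expectations then satisfy (11:5):

```python
from itertools import product

# A toy normalized game: n = 2, beta_0 = beta_1 = beta_2 = 2.
p = [0.4, 0.6]                          # umpire's-choice probabilities
G1 = {(t0, t1, t2): (t0 + 1) * (1 if t1 == t2 else -1)
      for t0, t1, t2 in product(range(2), repeat=3)}
G2 = {key: -val for key, val in G1.items()}   # (11:4): G1 + G2 = 0

def H(G, t1, t2):
    """(11:2): expectation over the chance index tau_0."""
    return sum(p[t0] * G[(t0, t1, t2)] for t0 in range(2))

# (11:5): the expectations inherit the zero-sum property.
zero_sum = all(abs(H(G1, t1, t2) + H(G2, t1, t2)) < 1e-12
               for t1, t2 in product(range(2), repeat=2))
```

Since summation over k and the expectation over τ_0 commute, this holds for any choice of the probabilities p, not just for the invented values above.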
1 Reverting to the definition of a strategy as given in 11.1.1.: In this game a player k
has one and only one personal move, and this independently of the course of the play:
the move ℳ_k. And he must make his choice at ℳ_k with nil information. So his
strategy is simply a definite choice for the move ℳ_k, no more and no less; i.e. precisely
τ_k = 1, ..., β_k.
We leave it to the reader to describe this game in terms of partitions, and to compare
the above with the formalistic definition of a strategy in (11:A) in 11.1.3.
CHAPTER III
ZERO-SUM TWO-PERSON GAMES: THEORY
12. Preliminary Survey
12.1. General Viewpoints
12.1.1. In the preceding chapter we obtained an all-inclusive formal
characterization of the general game of n persons (cf. 10.1.). We followed
up by developing an exact concept of strategy which permitted us to replace
the rather complicated general scheme of a game by a much simpler
special one, which was nevertheless shown to be fully equivalent to the
former (cf. 11.2.). In the discussion which follows it will sometimes be
more convenient to use one form, sometimes the other. It is therefore
desirable to give them specific technical names. We will accordingly call
them the extensive and the normalized form of the game, respectively.
Since these two forms are strictly equivalent, it is entirely within our
province to use in each particular case whichever is technically more convenient
at that moment. We propose, indeed, to make full use of this
possibility, and must therefore re-emphasize that this does not in the least
affect the absolutely general validity of all our considerations.
Actually the normalized form is better suited for the derivation of
general theorems, while the extensive form is preferable for the analysis of
special cases; i.e., the former can be used advantageously to establish properties
which are common to all games, while the latter brings out characteristic
differences of games and the decisive structural features which
determine these differences. (Cf. for the former 14., 17., and for the latter
e.g. 15.)
12.1.2. Since the formal description of all games has been completed,
we must now turn to build up a positive theory. It is to be expected that
a systematic procedure to this end will have to proceed from simpler games
to more complicated games. It is therefore desirable to establish an
ordering for all games according to their increasing degree of complication.
We have already classified games according to the number of participants,
a game with n participants being called an n-person game, and
also according to whether they are or are not of zero-sum. Thus we must
distinguish zero-sum n-person games and general n-person games. It will
be seen later that the general n-person game is very closely related to the
zero-sum (n + 1)-person game; in fact the theory of the former will
obtain as a special case of the theory of the latter. (Cf. 56.2.2.)
12.2. The One-person Game
12.2.1. We begin with some remarks concerning the one-person game.
In the normalized form this game consists of the choice of a number
τ = 1, ..., β, after which the (only) player 1 gets the amount ℋ(τ). 1 The
zero-sum case is obviously void 2 and there is nothing to say concerning it.
The general case corresponds to a general function ℋ(τ) and the "best"
or "rational" way of acting, i.e. of playing, consists obviously of this:
The player 1 will choose τ = 1, ..., β so as to make ℋ(τ) a maximum.
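In the normalized form the one-person game is thus literally a table lookup: tabulate ℋ(τ) and take the maximizing τ. A trivial Python sketch, with payoff values invented for illustration:

```python
# One-person game in normalized form: a single number tau = 1, ..., beta
# and a payoff function H, here given as a table (invented values).
H = {1: 3.0, 2: 7.5, 3: -1.0, 4: 7.2}

# The "rational" way of playing: choose tau so as to make H(tau) a maximum.
best_tau = max(H, key=H.get)
```

With these values the rational choice is τ = 2, yielding the payoff 7.5; the whole "theory" of the game is contained in this single maximization.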
This extreme simplification of the one-person game is, of course, due
to the fact that our variable τ represents not a choice (in a move) but
the player's strategy; i.e., it expresses his entire "theory" concerning the
handling of all conceivable situations which may occur in the course of the
play. It should be remembered that even a one-person game can be of a
very complicated pattern: It may contain chance moves as well as personal
moves (of the only player), each one possibly with numerous alternatives,
and the amount of information available to the player at any particular
personal move may vary in any prescribed way.
12.2.2. Numerous good examples of many complications and subtleties
which may arise in this way are given by the various games of "Patience"
or "Solitaire." There is, however, an important possibility for which, to
the best of our knowledge, examples are lacking among the customary
one-person games. This is the case of incomplete information, i.e. of non-equivalence
of anteriority and preliminarity of personal moves of the
unique player (cf. 6.4.). For such an absence of equivalence it would be
necessary that the player have two personal moves ℳ_κ and ℳ_λ, at neither
of which he is informed about the outcome of the choice of the other.
Such a state of lack of information is not easy to achieve, but we discussed
in 6.4.2. how it can be brought about by "splitting" the player into two
or more persons of identical interest and with imperfect communications.
We saw loc. cit. that Bridge is an example of this in a two-person game;
it would be easy to construct an analogous one-person game, but unluckily
the known forms of "solitaire" are not such. 3
This possibility is nevertheless a practical one for certain economic
setups: A rigidly established communistic society, in which the structure
of the distribution scheme is beyond dispute (i.e. where there is no exchange,
but only one unalterable imputation) would be such. Since the interests
of all the members of such a society are strictly identical, 4 this setup must be
treated as a one-person game. But owing to the conceivable imperfections
of communications among the members, all sorts of incomplete information
can occur.
This is then the case which, by a consistent use of the concept of strategy
(i.e. of planning), is naturally reduced to a simple maximum problem. On
the basis of our previous discussions it will therefore be apparent now
1 Cf. (11:D:a), (11:D:b) at the end of 11.2.3. We suppress the index 1.
2 Then ℋ(τ) = 0, cf. 11.4.
3 The existing "double solitaires" are competitive games between the two participants,
i.e. two-person games.
4 The individual members themselves cannot be considered as players, since all
possibilities of conflict among them, as well as coalitions of some of them against the
others, are excluded.
that this, and this only, is the case in which the simple maximum formulation,
i.e. the "Robinson Crusoe" form of economics, is appropriate.
12.2.3. These considerations show also the limitations of the pure
maximum, i.e. the "Robinson Crusoe," approach. The above example
of a society of a rigidly established and unquestioned distribution scheme
shows that on this plane a rational and critical appraisal of the distribution
scheme itself is impossible. In order to get a maximum problem it was
necessary to place the entire scheme of distribution among the rules of the
game, which are absolute, inviolable and above criticism. In order to
bring them into the sphere of combat and competition, i.e. the strategy
of the game, it is necessary to consider n-person games with n ≥ 2 and
thereby to sacrifice the simple maximum aspect of the problem.
12.3. Chance and Probability
12.3. Before going further, we wish to mention that the extensive
literature of "mathematical games," which was developed mainly in
the 18th and 19th centuries, deals essentially only with an aspect of the
matter which we have already left behind. This is the appraisal of the
influence of chance. This was, of course, effected by the discovery and
appropriate application of the calculus of probability and particularly
of the concept of mathematical expectations. In our discussions, the
operations necessary for this purpose were performed in 11.2.3. 1, 2
Consequently we are no longer interested in these games, where the
mathematical problem consists only in evaluating the role of chance, i.e.
in computing probabilities and mathematical expectations. Such games
lead occasionally to interesting exercises in probability theory; 3 but we
hope that the reader will agree with us that they do not belong in the
theory of games proper.
12.4. The Next Objective
12.4. We now proceed to the analysis of more complicated games.
The general one-person game having been disposed of, the simplest one
of the remaining games is the zero-sum two-person game. Accordingly
we are going to discuss it first.
Afterwards there is a choice of dealing either with the general two-person
game or with the zero-sum three-person game. It will be seen that
our technique of discussion necessitates taking up the zero-sum three-person
¹ We do not in the least intend, of course, to detract from the enormous importance
of those discoveries. It is just because of their great power that we are now in a position
to treat this side of the matter as briefly as we do. We are interested in those aspects of
the problem which are not settled by the concept of probability alone; consequently
these, and not the satisfactorily settled ones, must occupy our attention.
² Concerning the important connection between the use of mathematical expectation
and the concept of numerical utility, cf. 3.7. and the considerations which precede it.
³ Some games, like Roulette, are of an even more peculiar character. In Roulette the
mathematical expectation of the players is clearly negative. Thus the motives for
participating in that game cannot be understood if one identifies the monetary return
with utility.
88 ZERO-SUM TWO-PERSON GAMES: THEORY
game first. After that we shall extend the theory to the zero-sum n-person
game (for all n = 1, 2, 3, ...) and only subsequently to this will it be
found convenient to investigate the general n-person game.
13. Functional Calculus
13.1. Basic Definitions
13.1.1. Our next objective is, as stated in 12.4., the exhaustive dis-
cussion of the zero-sum two-person games. In order to do this adequately,
it will be necessary to use the symbolism of the functional calculus, or at
least of certain parts of it, more extensively than we have done thus far.
The concepts which we need are those of functions, of variables, of maxima
and minima, and of the use of the two latter as functional operations. All
this necessitates a certain amount of explanation and illustration, which
will be given here.
After that is done, we will prove some theorems concerning maxima,
minima, and a certain combination of these two, the saddle value. These
theorems will play an important part in the theory of the zero-sum two-
person games.
13.1.2. A function φ is a dependence which states how certain entities
x, y, ..., called the variables of φ, determine an entity u, called the
value of φ. Thus u is determined by φ and by the x, y, ..., and this
determination, i.e. dependence, will be indicated by the symbolic equation

u = φ(x, y, ...).

In principle it is necessary to distinguish between the function φ itself,
which is an abstract entity, embodying only the general dependence of
u = φ(x, y, ...) on the x, y, ..., and its value φ(x, y, ...) for any
specific x, y, .... In practical mathematical usage, however, it is often
convenient to write φ(x, y, ...), but with x, y, ... indeterminate,
instead of φ (cf. the examples (c)-(e) below; (a), (b) are even worse, cf.
footnote 1 below).
In order to describe the function φ it is of course necessary, among
other things, to specify the number of its variables x, y, .... Thus
there exist one-variable functions φ(x), two-variable functions φ(x, y), etc.
Some examples:
(a) The arithmetical operations x + 1 and x² are one-variable functions.¹
(b) The arithmetical operations of addition and of multiplication, x + y
and xy, are two-variable functions.¹
(c) For any fixed k the ℱ_k(π) of 9.2.4. is a one-variable function (of π).
But it can also be viewed as a two-variable function (of k, π).
(d) For any fixed k the Σ_k(κ, D_κ) of (11:A) in 11.1.3. is a two-variable
function (of κ, D_κ).²
(e) For any fixed k the ℋ_k(τ₁, ..., τ_n) of 11.2.3. is an n-variable function
(of τ₁, ..., τ_n).²
¹ Although they do not appear in the above canonical forms φ(x), φ(x, y).
² We could also treat k in (d) and k in (e) like k in (c), i.e. as a variable.
FUNCTIONAL CALCULUS 89
13.1.3. It is equally necessary, in order to describe a function φ, to
specify for which specific choices of its variables x, y, ... the value
φ(x, y, ...) is defined at all. These choices, i.e. these combinations of
x, y, ..., form the domain of φ.
The examples (a)-(e) show some of the many possibilities for the domains
of functions: They may consist of arithmetical or of analytical entities, as
well as of others. Indeed:
(a) We may consider the domain to consist of all integer numbers,
or equally well of all real numbers.
(b) All pairs of either category of numbers used in (a), form the domain.
(c) The domain is the set Ω of all objects π which represent the plays
of the game Γ (cf. 9.1.1. and 9.2.4.).
(d) The domain consists of pairs of a positive integer κ and a set D_κ.
(e) The domain consists of certain systems of positive integers.
A function φ is an arithmetical function if its variables are positive
integers; it is a numerical function if its variables are real numbers; it is a
set-function if its variables are sets (as, e.g., D_κ in (d)).
For the moment we are mainly interested in arithmetical and numerical
functions.
We conclude this section by an observation which is a natural conse-
quence of our view of the concept of a function. This is, that the number
of variables, the domain, and the dependence of the value on the varia-
bles, constitute the function as such: i.e., if two functions φ, ψ have the
same variables x, y, ... and the same domain, and if φ(x, y, ...) =
ψ(x, y, ...) throughout this domain, then φ, ψ are identical in all respects.¹
13.2. The Operations Max and Min
13.2.1. Consider a function φ which has real numbers for values.
Assume first that φ is a one-variable function. If its variable can be
chosen, say as x = x₀, so that φ(x₀) ≥ φ(x′) for all other choices x′, then we
say that φ has the maximum φ(x₀) and assumes it at x = x₀.
Observe that this maximum φ(x₀) is uniquely determined; i.e., the
maximum may be assumed at x = x₀ for several x₀, but they must all fur-
nish the same value φ(x₀).² We denote this value by Max φ(x), the maxi-
mum value of φ(x).
If we replace ≥ by ≤, then the concept of φ's minimum, φ(x₀), obtains,
and of the x₀ where φ assumes it. Again there may be several such x₀, but they
must all furnish the same value φ(x₀). We denote this value by Min φ(x),
the minimum value of φ.
¹ The concept of a function is closely allied to that of a set, and the above should be
viewed in parallel with the exposition of 8.2.
² Proof: Consider two such x₀, say x₀′ and x₀″. Then φ(x₀′) ≥ φ(x₀″) and φ(x₀″) ≥ φ(x₀′).
Hence φ(x₀′) = φ(x₀″).
Observe that there is no a priori guarantee that either Max φ(x) or
Min φ(x) exist.¹
If, however, the domain of φ, over which the variable x may run,
consists only of a finite number of elements, then the existence of both
Max φ(x) and Min φ(x) is obvious. This will actually be the case for most
functions which we shall discuss.² For the remaining ones it will be a
consequence of their continuity, together with the geometrical limitations of
their domains.³ At any rate we are restricting our considerations to such
functions, for which Max and Min exist.
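For a finite domain these operations are elementary. The following Python sketch (the function φ and its domain are invented for illustration, not taken from the text) exhibits both values and the possibly multiple points at which they are assumed:

```python
# Max and Min of a one-variable function over a finite domain.
# phi and its domain are illustrative inventions, not from the text.
def phi(x):
    return (x - 2) ** 2

domain = [0, 1, 2, 3, 4]

max_value = max(phi(x) for x in domain)   # Max phi(x)
min_value = min(phi(x) for x in domain)   # Min phi(x)

# The maximum may be assumed at several x0, but the value is unique.
argmax_set = [x for x in domain if phi(x) == max_value]
argmin_set = [x for x in domain if phi(x) == min_value]

print(max_value, argmax_set)  # 4 [0, 4]: assumed at x = 0 and x = 4
print(min_value, argmin_set)  # 0 [2]:    assumed at x = 2 only
```

Here the maximum is assumed at two points, yet both furnish the same value, as required by footnote 2 above.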
13.2.2. Let now φ have any number of variables x, y, z, .... By sin-
gling out one of the variables, say x, and treating the others, y, z, ..., as
constants, we can view φ(x, y, z, ...) as a one-variable function, of the
variable x. Hence we may form Max φ(x, y, z, ...), Min φ(x, y, z, ...)
as in 13.2.1., of course with respect to this x.
But since we could have done this equally well for any one of the other
variables y, z, ..., it becomes necessary to indicate that the operations
Max, Min were performed with respect to the variable x. We do this by
writing Max_x φ(x, y, z, ...), Min_x φ(x, y, z, ...) instead of the incom-
plete expressions Max φ, Min φ. Thus we can now apply to the function
φ(x, y, z, ...) any one of the operators Max_x, Min_x, Max_y, Min_y, Max_z,
Min_z, .... They are all distinct and our notation is unambiguous.
This notation is even advantageous for one-variable functions, and we
will use it accordingly; i.e. we write Max_x φ(x), Min_x φ(x) instead of the
Max φ(x), Min φ(x) of 13.2.1.
Sometimes it will be convenient or even necessary to indicate the
domain S for a maximum or a minimum explicitly. E.g. when the func-
tion φ(x) is defined also for (some) x outside of S, but it is desired to form the
maximum or minimum within S only. In such a case we write

Max_{x in S} φ(x),   Min_{x in S} φ(x)

instead of Max_x φ(x), Min_x φ(x).
In certain other cases it may be simpler to enumerate the values of φ(x),
say a, b, ..., than to express φ(x) as a function. We may then write
¹ E.g. if φ(x) ≡ x, with all real numbers as domain, then neither Max φ(x) nor Min φ(x)
exist.
² Typical examples: The functions ℋ_k(τ₁, ..., τ_n) of 11.2.3. (or of (e) in 13.1.2.), the
function ℋ(τ₁, τ₂) of 14.1.1.
³ Typical examples: The functions K(ξ, η), Max_ξ K(ξ, η), Min_η K(ξ, η) in
17.4., the functions Min_{τ₂} Σ_{τ₁=1}^{β₁} ℋ(τ₁, τ₂) ξ_{τ₁}, Max_{τ₁} Σ_{τ₂=1}^{β₂} ℋ(τ₁, τ₂) η_{τ₂} in 17.5.2. The vari-
ables of all these functions are ξ or η or both, with respect to which subsequent maxima
and minima are formed.
Another instance is discussed in 46.2.1., espec. footnote 1 on p. 384, where the
mathematical background of this subject and its literature are considered. It seems
unnecessary to enter upon these here, since the above examples are entirely elementary.
Max (a, b, ...) [Min (a, b, ...)] instead of Max_x φ(x) [Min_x φ(x)].¹
13.2.3. Observe that while φ(x, y, z, ...) is a function of the variables
x, y, z, ..., Max_x φ(x, y, z, ...), Min_x φ(x, y, z, ...) are still func-
tions, but of the variables y, z, ... only. Purely typographically, x is
still present in Max_x φ(x, y, z, ...), Min_x φ(x, y, z, ...), but it is no
longer a variable of these functions. We say that the operations Max_x,
Min_x kill the variable x which appears as their index.²
Since Max_x φ(x, y, z, ...), Min_x φ(x, y, z, ...) are still functions
of the variables y, z, ...,³ we can go on and form the expressions

Max_y Max_x φ(x, y, z, ...),   Max_y Min_x φ(x, y, z, ...),
Min_y Max_x φ(x, y, z, ...),   Min_y Min_x φ(x, y, z, ...).

We could equally form

Max_x Max_y φ(x, y, z, ...),   Max_x Min_y φ(x, y, z, ...),

etc.;⁴ or use two other variables than x, y (if there are any); or use more
variables than two (if there are any).
In fine, after having applied as many operations Max or Min as there are
variables of φ(x, y, z, ...) (in any order and combination, but precisely
one for each variable x, y, z, ...) we obtain a function of no variables
at all, i.e. a constant.
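The successive killing of variables can be sketched as follows (the three-variable function and its finite domains are invented for illustration); the innermost operation applies first and kills its variable, then the next one follows suit:

```python
# Killing variables one by one: applying precisely one Max or Min per
# variable leaves a function of no variables, i.e. a constant.
# phi and the domains X, Y, Z are invented for illustration.
def phi(x, y, z):
    return x * y - z

X, Y, Z = [1, 2], [1, 2, 3], [0, 1]

# Max_x kills x: the result is a function of y, z only.
def max_x(y, z):
    return max(phi(x, y, z) for x in X)

# One operation per variable, innermost first:
constant = min(max(min(phi(x, y, z) for z in Z)   # Min_z kills z
                   for y in Y)                    # Max_y kills y
               for x in X)                        # Min_x kills x
print(constant)  # Min_x Max_y Min_z phi(x, y, z)
```

With these domains the constant is Min_x Max_y (x·y − 1) = 2.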
13.3. Commutativity Questions
13.3.1. The discussions of 13.2.3. provide the basis for viewing the Max*,
Min*, Maxy, Min tf , Max,, Min*, entirely as functional operations, each
one of which carries a function into another function. 5 We have seen that
we can apply several of them successively. In this latter case it is prima
facie relevant, in which order the successive operations are applied.
But is it really relevant? Precisely: Two operations are said to commute
if, in case of their successive application (to the same object), the order in
which this is done does not matter. Now we ask : Do the operations Max*,
Min*, Maxy, Min y , Max,, Min,, all commute with each other or not?
We shall answer this question. For this purpose we need use only two
variables, say x, y and then it is not necessary that <f> be a function of further
variables besides x, y. 6
¹ Of course Max (a, b, ...) [Min (a, b, ...)] is simply the greatest [smallest] one
among the numbers a, b, ....
² A well known operation in analysis which kills a variable x is the definite integral:
φ(x) is a function of x, but ∫₀¹ φ(x) dx is a constant.
³ We treated y, z, ... as constant parameters in 13.2.2. But now that x has been
killed we release the variables y, z, ....
⁴ Observe that if two or more operations are applied, the innermost applies first and
kills its variable; then the next one follows suit, etc.
⁵ With one variable less, since these operations kill one variable each.
⁶ Further variables of φ, if any, may be treated as constants for the purpose of this
analysis.
So we consider a two-variable function φ(x, y). The significant ques-
tions of commutativity are then clearly these:
Which of the three equations which follow are generally true:

(13:1)   Max_x Max_y φ(x, y) = Max_y Max_x φ(x, y),
(13:2)   Min_x Min_y φ(x, y) = Min_y Min_x φ(x, y),
(13:3)   Max_x Min_y φ(x, y) = Min_y Max_x φ(x, y).¹

We shall see that (13:1), (13:2) are true, while (13:3) is not; i.e., any two
Max or any two Min commute, while a Max and a Min do not commute in
general. We shall also obtain a criterion which determines in which special
cases Max and Min commute.
This question of commutativity of Max and Min will turn out to be
decisive for the zero-sum two-person game (cf. 14.4.2. and 17.6.).
13.3.2. Let us first consider (13:1). It ought to be intuitively clear that
Max_x Max_y φ(x, y) is the maximum of φ(x, y) if we treat x, y together as one
variable; i.e. that for some suitable x₀, y₀, φ(x₀, y₀) = Max_x Max_y φ(x, y),
and that for all x′, y′, φ(x₀, y₀) ≥ φ(x′, y′).
If a mathematical proof is nevertheless wanted, we give it here: Choose
x₀ so that Max_y φ(x, y) assumes its x-maximum at x = x₀, and then choose y₀
so that φ(x₀, y) assumes its y-maximum at y = y₀. Then

φ(x₀, y₀) = Max_y φ(x₀, y) = Max_x Max_y φ(x, y),

and for all x′, y′

φ(x₀, y₀) = Max_y φ(x₀, y) ≥ Max_y φ(x′, y) ≥ φ(x′, y′).

This completes the proof.
Now by interchanging x, y we see that Max_y Max_x φ(x, y) is equally
the maximum of φ(x, y) if we treat x, y as one variable.
Thus both sides of (13:1) have the same characteristic property, and
therefore they are equal to each other. This proves (13:1).
Literally the same arguments apply to Min in place of Max: we need
only use ≤ consistently in place of ≥. This proves (13:2).
This device of treating two variables x, y as one is occasionally quite
convenient in itself. When we use it (as, e.g., in 18.2.1., with τ₁, τ₂, ℋ(τ₁, τ₂)
in place of our present x, y, φ(x, y)), we shall write Max_{x,y} φ(x, y) and
Min_{x,y} φ(x, y).
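The identities (13:1), (13:2) and the device Max_{x,y} can be confirmed by brute force on any finite matrix; the random matrix below is merely an invented test case:

```python
from itertools import product
import random

# Brute-force check of (13:1) and (13:2) on a small random matrix.
# The matrix is an invented example; the identities hold for any one.
random.seed(0)
X = range(3)
Y = range(4)
table = {(x, y): random.randint(-5, 5) for (x, y) in product(X, Y)}
phi = lambda x, y: table[(x, y)]

max_x_max_y = max(max(phi(x, y) for y in Y) for x in X)
max_y_max_x = max(max(phi(x, y) for x in X) for y in Y)
max_xy      = max(phi(x, y) for (x, y) in product(X, Y))  # x, y as one variable

assert max_x_max_y == max_y_max_x == max_xy          # (13:1)

min_x_min_y = min(min(phi(x, y) for y in Y) for x in X)
min_y_min_x = min(min(phi(x, y) for x in X) for y in Y)
assert min_x_min_y == min_y_min_x                    # (13:2)
```

Both iterated maxima coincide with the absolute maximum over the combined variable, exactly as the proof above asserts.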
13.3.3. At this point a graphical illustration may be useful. Assume
that the domain of φ for x, y is a finite set. Denote, for the sake of sim-
plicity, the possible values of x (in this domain) by 1, ..., t and those of
y by 1, ..., s. Then the values of φ(x, y) corresponding to all x, y in this
domain, i.e. to all combinations of x = 1, ..., t, y = 1, ..., s, can
be arranged in a rectangular scheme: We use a rectangle of t rows and s
¹ The combination Min_x Max_y requires no treatment of its own, since it obtains from
the above Max_x Min_y by interchanging x, y.
columns, using the number x = 1, ..., t to enumerate the former and
the number y = 1, ..., s to enumerate the latter. Into the field of inter-
section of row x and column y, to be known briefly as the field x, y, we
write the value φ(x, y) (Fig. 11). This arrangement, known in mathematics
as a rectangular matrix, amounts to a complete characterization of the func-
tion φ(x, y). The specific values φ(x, y) are the matrix elements.

          1          2        ...      y       ...      s
  1    φ(1, 1)    φ(1, 2)    ...    φ(1, y)   ...   φ(1, s)
  2    φ(2, 1)    φ(2, 2)    ...    φ(2, y)   ...   φ(2, s)
  .       .          .                 .                .
  x    φ(x, 1)    φ(x, 2)    ...    φ(x, y)   ...   φ(x, s)
  .       .          .                 .                .
  t    φ(t, 1)    φ(t, 2)    ...    φ(t, y)   ...   φ(t, s)

                        Figure 11.
Now Max_y φ(x, y) is the maximum of φ(x, y) in the row x. Max_x Max_y φ(x, y)
is therefore the maximum of the row maxima. On the other hand,
Max_x φ(x, y)
is the maximum of φ(x, y) in the column y. Max_y Max_x φ(x, y) is therefore
the maximum of the column maxima. Our assertions in 13.3.2. concerning
(13:1) can now be stated thus: The maximum of the row maxima is the
same as the maximum of the column maxima; both are the absolute maxi-
mum of φ(x, y) in the matrix. In this form, at least, these assertions should
be intuitively obvious. The assertions concerning (13:2) obtain in the
same way if Min is put in place of Max.
13.4. The Mixed Case. Saddle Points
13.4.1. Let us now consider (13:3). Using the terminology of 13.3.3.,
the left-hand side of (13:3) is the maximum of the row minima and the
right-hand side is the minimum of the column maxima. These two numbers
are neither absolute maxima, nor absolute minima, and there is no prima
facie evidence why they should be generally equal. Indeed, they are not.
Two functions for which they are different are given in Figs. 12, 13. A
function for which they are equal is given in Fig. 14. (All these figures
should be read in the sense of the explanations of 13.3.3. and of Fig. 11.)
These figures, as well as the entire question of commutativity of Max
and Min, will play an essential role in the theory of zero-sum two-person
games. Indeed, it will be seen that they represent certain games which
are typical for some important possibilities in that theory (cf. 18.1.2.).
But for the moment we want to discuss them on their own account, without
any reference to those applications.
t = s = 2

          1     2    row minima
    1     1    -1       -1
    2    -1     1       -1

 column
 maxima   1     1

 Maximum of row minima   = -1
 Minimum of column maxima =  1

                Figure 12.

t = s = 3

          1     2     3    row minima
    1     0     1    -1       -1
    2    -1     0     1       -1
    3     1    -1     0       -1

 column
 maxima   1     1     1

 Maximum of row minima   = -1
 Minimum of column maxima =  1

                Figure 13.

t = s = 2

          1     2    row minima
    1    -2    -1       -2
    2     1     2        1

 column
 maxima   1     2

 Maximum of row minima   = 1
 Minimum of column maxima = 1

                Figure 14.
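These computations can be carried out mechanically. In the Python sketch below the three matrices are taken to be those of Figs. 12-14 (an assumption consistent with the stated row minima and column maxima, and with the games these figures are identified with in 18.1.2.):

```python
# Max of row minima vs. min of column maxima for Figs. 12-14
# (matrix entries assumed as discussed in the text).
fig12 = [[ 1, -1],
         [-1,  1]]
fig13 = [[ 0,  1, -1],
         [-1,  0,  1],
         [ 1, -1,  0]]
fig14 = [[-2, -1],
         [ 1,  2]]

def maxmin(m):      # maximum of the row minima
    return max(min(row) for row in m)

def minmax(m):      # minimum of the column maxima
    return min(max(col) for col in zip(*m))

for m in (fig12, fig13, fig14):
    print(maxmin(m), minmax(m))
# Figs. 12, 13: -1 < 1, so (13:3) fails; Fig. 14: 1 = 1, so it holds.
```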
13.4.2. Since (13:3) is neither generally true, nor generally false, it is
desirable to discuss the relationship of its two sides

(13:4)   Max_x Min_y φ(x, y),   Min_y Max_x φ(x, y),

more fully. Figs. 12-14, which illustrated the behavior of (13:3) to a
certain degree, give some clues as to what this behavior is likely to be.
Specifically:

(13:A)   In all three figures the left-hand side of (13:3) (i.e. the first
expression of (13:4)) is ≤ the right-hand side of (13:3) (i.e. the
second expression in (13:4)).
(13:B)   In Fig. 14, where (13:3) is true, there exists a field in the
matrix which contains simultaneously the minimum of its row
and the maximum of its column. (This happens to be the ele-
ment 1, the left lower corner field of the matrix.) In the
other figures 12, 13, where (13:3) is not true, there exists no
such field.

It is appropriate to introduce a general concept which describes the
behavior of the field mentioned in (13:B). We define accordingly:
Let φ(x, y) be any two-variable function. Then x₀, y₀ is a saddle point
of φ if at the same time φ(x, y₀) assumes its maximum at x = x₀ and φ(x₀, y)
assumes its minimum at y = y₀.
The reason for the use of the name saddle point is this: Imagine the
matrix of all x, y elements (x = 1, ..., t, y = 1, ..., s; cf. Fig. 11)
as an oreographical map, the height of the mountain over the field x, y
being the value of φ(x, y) there. Then the definition of a saddle point
x₀, y₀ describes indeed essentially a saddle or pass at that point (i.e. over the
field x₀, y₀); the row x₀ is the ridge of the mountain, and the column y₀ is
the road (from valley to valley) which crosses this ridge.
The formula (13:C*) in 13.5.2. also falls in with this interpretation.¹
13.4.3. Figs. 12, 13 show that a φ may have no saddle points at all.
On the other hand it is conceivable that φ possesses several saddle points.
But all saddle points x₀, y₀, if they exist at all, must furnish the same
value φ(x₀, y₀).² We denote this value, if it exists at all, by Sa_{x/y} φ(x, y),
the saddle value of φ(x, y).³
We now formulate the theorems which generalize the indications of
(13:A), (13:B). We denote them by (13:A*), (13:B*), and emphasize
that they are valid for all functions φ(x, y).

(13:A*)   Always

Max_x Min_y φ(x, y) ≤ Min_y Max_x φ(x, y).

(13:B*)   We have

Max_x Min_y φ(x, y) = Min_y Max_x φ(x, y)

if and only if a saddle point x₀, y₀ of φ exists.
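The defining property of a saddle point, and with it the criterion (13:B*), can be checked mechanically on a finite matrix. The sketch below (0-based indices; the matrices are taken to correspond to Figs. 12 and 14) lists all fields which are simultaneously the maximum of their column and the minimum of their row:

```python
# A field x0, y0 is a saddle point when phi(x, y0) is maximal at x = x0
# (entry is the max of its column) and phi(x0, y) is minimal at y = y0
# (entry is the min of its row). Matrices assumed as in Figs. 12 and 14.
def saddle_points(m):
    rows, cols = range(len(m)), range(len(m[0]))
    return [(x, y) for x in rows for y in cols
            if m[x][y] == max(m[i][y] for i in rows)    # max of its column
            and m[x][y] == min(m[x][j] for j in cols)]  # min of its row

fig12 = [[1, -1], [-1, 1]]
fig14 = [[-2, -1], [1, 2]]

print(saddle_points(fig12))  # [] : no saddle point, (13:3) fails
print(saddle_points(fig14))  # [(1, 0)] : lower left corner, value 1
```

In accordance with (13:B*), the matrix of Fig. 14 (where the two sides of (13:3) are equal) has a saddle point, while that of Fig. 12 has none.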
13.5. Proofs of the Main Facts
13.5.1. We define first two sets A_φ, B_φ for every function φ(x, y).
Min_y φ(x, y) is a function of x; let A_φ be the set of all those x₀ for which
¹ All this is closely connected with, although not precisely a special case of, certain
more general mathematical theories involving extremal problems, calculus of variations,
etc. Cf. M. Morse: The Critical Points of Functions and the Calculus of Variations in
the Large, Bull. Am. Math. Society, Jan.-Feb. 1929, pp. 38 cont., and What is Analysis in
the Large?, Am. Math. Monthly, Vol. XLIX, 1942, pp. 358 cont.
² This follows from (13:C*) in 13.5.2. There exists an equally simple direct proof:
Consider two saddle points x₀, y₀, say x₀′, y₀′ and x₀″, y₀″. Then:

φ(x₀′, y₀′) = Max_x φ(x, y₀′) ≥ φ(x₀″, y₀′) ≥ Min_y φ(x₀″, y) = φ(x₀″, y₀″),

i.e.: φ(x₀′, y₀′) ≥ φ(x₀″, y₀″). Similarly φ(x₀″, y₀″) ≥ φ(x₀′, y₀′).
Hence φ(x₀′, y₀′) = φ(x₀″, y₀″).
³ Clearly the operation Sa_{x/y} φ(x, y) kills both variables x, y. Cf. 13.2.3.
this function assumes its maximum at x = x₀. Max_x φ(x, y) is a function
of y; let B_φ be the set of all those y₀ for which this function assumes its mini-
mum at y = y₀.
We now prove (13:A*), (13:B*).
Proof of (13:A*): Choose x₀ in A_φ and y₀ in B_φ. Then

Max_x Min_y φ(x, y) = Min_y φ(x₀, y) ≤ φ(x₀, y₀)
    ≤ Max_x φ(x, y₀) = Min_y Max_x φ(x, y),

i.e.: Max_x Min_y φ(x, y) ≤ Min_y Max_x φ(x, y), as desired.
Proof of the necessity of the existence of a saddle point in (13:B*):
Assume that

Max_x Min_y φ(x, y) = Min_y Max_x φ(x, y).

Choose x₀ in A_φ and y₀ in B_φ; then we have

Max_x φ(x, y₀) = Min_y Max_x φ(x, y)
    = Max_x Min_y φ(x, y) = Min_y φ(x₀, y).

Hence for every x′

φ(x′, y₀) ≤ Max_x φ(x, y₀) = Min_y φ(x₀, y) ≤ φ(x₀, y₀),

i.e. φ(x₀, y₀) ≥ φ(x′, y₀), so φ(x, y₀) assumes its maximum at x = x₀.
And for every y′

φ(x₀, y′) ≥ Min_y φ(x₀, y) = Max_x φ(x, y₀) ≥ φ(x₀, y₀),

i.e. φ(x₀, y₀) ≤ φ(x₀, y′), so φ(x₀, y) assumes its minimum at y = y₀.
Consequently these x₀, y₀ form a saddle point.
Proof of the sufficiency of the existence of a saddle point in (13:B*):
Let x₀, y₀ be a saddle point. Then

Max_x Min_y φ(x, y) ≥ Min_y φ(x₀, y) = φ(x₀, y₀),
Min_y Max_x φ(x, y) ≤ Max_x φ(x, y₀) = φ(x₀, y₀),

hence

Max_x Min_y φ(x, y) ≥ φ(x₀, y₀) ≥ Min_y Max_x φ(x, y).

Combining this with (13:A*) gives

Max_x Min_y φ(x, y) = φ(x₀, y₀) = Min_y Max_x φ(x, y),

and hence the desired equation.
13.5.2. The considerations of 13.5.1. yield some further results which
are worth noting. We assume now the existence of saddle points, i.e.
the validity of the equation of (13 :B*).
For every saddle point x₀, y₀

(13:C*)   φ(x₀, y₀) = Max_x Min_y φ(x, y) = Min_y Max_x φ(x, y).
Proof: This coincides with the last equation of the sufficiency proof of
(13:B*) in 13.5.1.
(13:D*)   x₀, y₀ is a saddle point if and only if x₀ belongs to A_φ and y₀
belongs to B_φ.¹

Proof of sufficiency: Let x₀ belong to A_φ and y₀ belong to B_φ. Then the
necessity proof of (13:B*) in 13.5.1. shows exactly that this x₀, y₀ is a saddle
point.
Proof of necessity: Let x₀, y₀ be a saddle point. We use (13:C*). For
every x′

Min_y φ(x′, y) ≤ Max_x Min_y φ(x, y) = φ(x₀, y₀) = Min_y φ(x₀, y),

i.e. Min_y φ(x₀, y) ≥ Min_y φ(x′, y), so Min_y φ(x, y) assumes its maximum
at x = x₀. Hence x₀ belongs to A_φ. Similarly for every y′

Max_x φ(x, y′) ≥ Min_y Max_x φ(x, y) = φ(x₀, y₀) = Max_x φ(x, y₀),

i.e. Max_x φ(x, y₀) ≤ Max_x φ(x, y′), so Max_x φ(x, y) assumes its minimum
at y = y₀. Hence y₀ belongs to B_φ. This completes the proof.
The theorems (13:C*), (13:D*) indicate, by the way, the limitations
of the analogy described at the end of 13.4.2.; i.e. they show that our con-
cept of a saddle point is narrower than the everyday (oreographical) idea
of a saddle or a pass. Indeed, (13:C*) states that all saddles, provided
that they exist, are at the same altitude. And (13:D*) states, if we depict
the sets A_φ, B_φ as two intervals of numbers,² that all saddles together form
an area which has the form of a rectangular plateau.³
13.5.3. We conclude this section by proving the existence of a saddle
point for a special kind of x, y and φ(x, y). This special case will be seen
to be of a not inconsiderable generality. Let a function ψ(x, u) of two
variables x, u be given. We consider all functions f(x) of the variable x
which have values in the domain of u. Now we keep the variable x, but
in place of the variable u we use the function f itself.⁴ The expression
ψ(x, f(x)) is determined by x, f; hence we may treat ψ(x, f(x)) as a function of
the variables x, f and let it take the place of φ(x, y).
We wish to prove that for these x, f and ψ(x, f(x)), in place of x, y
and φ(x, y), a saddle point exists; i.e. that

(13:E)   Max_x Min_f ψ(x, f(x)) = Min_f Max_x ψ(x, f(x)).

Proof: For every x choose a u₀ with ψ(x, u₀) = Min_u ψ(x, u). This u₀
depends on x, hence we can define a function f₀ by u₀ = f₀(x). Thus
ψ(x, f₀(x)) = Min_u ψ(x, u). Consequently

Max_x ψ(x, f₀(x)) = Max_x Min_u ψ(x, u).
¹ Only under our hypothesis at the beginning of this section! Otherwise there exist
no saddle points at all.
² If the x, y are positive integers, then this can certainly be brought about by two
appropriate permutations of their domains.
³ The general mathematical concepts alluded to in footnote 1 on p. 95 are free from
these limitations. They correspond precisely to the everyday idea of a pass.
⁴ The reader is asked to visualize this: Although itself a function, f may perfectly well
be the variable of another function.
A fortiori,

(13:F)   Min_f Max_x ψ(x, f(x)) ≤ Max_x Min_u ψ(x, u).

Now Min_f ψ(x, f(x)) is the same thing as Min_u ψ(x, u), since f enters into
this expression only via its value at the one place x, i.e. f(x), for which we
may write u. So Min_f ψ(x, f(x)) = Min_u ψ(x, u), and consequently

(13:G)   Max_x Min_f ψ(x, f(x)) = Max_x Min_u ψ(x, u).

(13:F), (13:G) together establish the validity of a ≥ in (13:E). The ≤
in (13:E) holds owing to (13:A*). Hence we have = in (13:E), i.e. the
proof is completed.
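For finite domains, (13:E) can be verified by brute force, enumerating every function f from the domain of x into the domain of u; the function ψ and the domains below are invented examples:

```python
from itertools import product

# Brute-force check of (13:E) on small finite domains: enumerate every
# function f: X -> U, represented as a tuple (f(0), f(1), f(2)).
# psi and the domains X, U are invented for illustration.
X = [0, 1, 2]
U = [0, 1]
psi = lambda x, u: (x - u) ** 2 - u

all_f = list(product(U, repeat=len(X)))  # all functions f: X -> U

lhs = max(min(psi(x, f[x]) for f in all_f) for x in X)   # Max_x Min_f
rhs = min(max(psi(x, f[x]) for x in X) for f in all_f)   # Min_f Max_x

# Both coincide with Max_x Min_u psi(x, u), as (13:F), (13:G) predict.
assert lhs == rhs == max(min(psi(x, u) for u in U) for x in X)
print(lhs)
```

The reason the saddle point exists here is visible in the enumeration: the minimizing f may choose its value separately for each x, which is exactly what the proof's f₀ does.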
14. Strictly Determined Games
14.1. Formulation of the Problem
14.1.1. We now proceed to the consideration of the zero-sum two-person
game. Again we begin by using the normalized form.
According to this the game consists of two moves: Player 1 chooses a
number τ₁ = 1, ..., β₁, player 2 chooses a number τ₂ = 1, ..., β₂, each
choice being made in complete ignorance of the other, and then players
1 and 2 get the amounts ℋ₁(τ₁, τ₂) and ℋ₂(τ₁, τ₂), respectively.¹
Since the game is zero-sum, we have, by 11.4.,

ℋ₁(τ₁, τ₂) + ℋ₂(τ₁, τ₂) = 0.

We prefer to express this by writing

ℋ₁(τ₁, τ₂) ≡ ℋ(τ₁, τ₂),   ℋ₂(τ₁, τ₂) ≡ -ℋ(τ₁, τ₂).
We shall now attempt to understand how the obvious desires of the
players 1, 2 will determine the events, i.e. the choices τ₁, τ₂.
It must again be remembered, of course, that τ₁, τ₂ stand ultima analysi
not for a choice (in a move) but for the players' strategies, i.e. their entire
"theory" or "plan" concerning the game.
For the moment we leave it at that. Subsequently we shall also go
"behind" the τ₁, τ₂ and analyze the course of a play.
14.1.2. The desires of the players 1, 2 are simple enough. 1 wishes
to make ℋ₁(τ₁, τ₂) ≡ ℋ(τ₁, τ₂) a maximum, 2 wishes to make ℋ₂(τ₁, τ₂) ≡
-ℋ(τ₁, τ₂) a maximum; i.e. 1 wants to maximize and 2 wants to minimize
ℋ(τ₁, τ₂).
So the interests of the two players concentrate on the same object: the
one function ℋ(τ₁, τ₂). But their intentions are, as is to be expected in a
zero-sum two-person game, exactly opposite: 1 wants to maximize, 2 wants
to minimize. Now the peculiar difficulty in all this is that neither player
has full control of the object of his endeavor, of ℋ(τ₁, τ₂), i.e. of both
its variables τ₁, τ₂. 1 wants to maximize, but he controls only τ₁; 2 wants to
minimize, but he controls only τ₂: What is going to happen?
¹ Cf. (11:D) in 11.2.3.
STRICTLY DETERMINED GAMES 99
The difficulty is that no particular choice of, say τ₁, need in itself make
ℋ(τ₁, τ₂) either great or small. The influence of τ₁ on ℋ(τ₁, τ₂) is, in general,
no definite thing; it becomes that only in conjunction with the choice of
the other variable, in this case τ₂. (Cf. the corresponding difficulty in
economics as discussed in 2.2.3.)
Observe that from the point of view of the player 1, who chooses a
variable, say τ₁, the other variable can certainly not be considered as a
chance event. The other variable, in this case τ₂, is dependent upon the
will of the other player, which must be regarded in the same light of "ration-
ality" as one's own. (Cf. also the end of 2.2.3. and 2.2.4.)
14.1.3. At this point it is convenient to make use of the graphical
representation developed in 13.3.3. We represent ℋ(τ₁, τ₂) by a rectangular
matrix: We form a rectangle of β₁ rows and β₂ columns, using the number
τ₁ = 1, ..., β₁ to enumerate the former, and the number τ₂ = 1, ..., β₂
to enumerate the latter; and into the field τ₁, τ₂ we write the matrix element
ℋ(τ₁, τ₂). (Cf. with Figure 11 in 13.3.3. The φ, x, y, t, s there correspond
to our ℋ, τ₁, τ₂, β₁, β₂ (Figure 15).)
             1            2         ...      τ₂        ...      β₂
  1      ℋ(1, 1)      ℋ(1, 2)      ...    ℋ(1, τ₂)    ...   ℋ(1, β₂)
  2      ℋ(2, 1)      ℋ(2, 2)      ...    ℋ(2, τ₂)    ...   ℋ(2, β₂)
  .         .            .                    .                  .
  τ₁     ℋ(τ₁, 1)     ℋ(τ₁, 2)     ...    ℋ(τ₁, τ₂)   ...   ℋ(τ₁, β₂)
  .         .            .                    .                  .
  β₁     ℋ(β₁, 1)     ℋ(β₁, 2)     ...    ℋ(β₁, τ₂)   ...   ℋ(β₁, β₂)

                          Figure 15.
It ought to be understood that the function ℋ(τ₁, τ₂) is subject to no
restrictions whatsoever; i.e., we are free to choose it absolutely at will.¹
Indeed, any given function ℋ(τ₁, τ₂) defines a zero-sum two-person game
in the sense of (11:D) of 11.2.3. by simply defining

ℋ₁(τ₁, τ₂) ≡ ℋ(τ₁, τ₂),   ℋ₂(τ₁, τ₂) ≡ -ℋ(τ₁, τ₂)

(cf. 14.1.1.). The desires of the players 1, 2, as described above in the
last section, can now be visualized as follows: Both players are solely
¹ The domain, of course, is prescribed: It consists of all pairs τ₁, τ₂ with τ₁ = 1, ...,
β₁; τ₂ = 1, ..., β₂. This is a finite set, so all Max and Min exist, cf. the end of 13.2.1.
interested in the value of the matrix element ℋ(τ₁, τ₂). Player 1 tries to
maximize it, but he controls only the row, i.e. the number τ₁. Player 2
tries to minimize it, but he controls only the column, i.e. the number τ₂.
We must now attempt to find a satisfactory interpretation for the out-
come of this peculiar tug-of-war.¹
14.2. The Minorant and the Majorant Games
14.2. Instead of attempting a direct attack on the game Γ itself, for
which we are not yet prepared, let us consider two other games, which are
closely connected with Γ and the discussion of which is immediately feasible.
The difficulty in analyzing Γ is clearly that the player 1, in choosing τ₁,
does not know what choice τ₂ of the player 2 he is going to face, and vice
versa. Let us therefore compare Γ with other games where this difficulty
does not arise.
We define first a game Γ₁, which agrees with Γ in every detail except that
player 1 has to make his choice of τ₁ before player 2 makes his choice of
τ₂, and that player 2 makes his choice in full knowledge of the value given
by player 1 to τ₁ (i.e. 1's move is preliminary to 2's move).² In this game
Γ₁ player 1 is obviously at a disadvantage as compared to his position
in the original game Γ. We shall therefore call Γ₁ the minorant game of Γ.
We define similarly a second game Γ₂, which again agrees with Γ in every
detail except that now player 2 has to make his choice of τ₂ before player 1
makes his choice of τ₁, and that 1 makes his choice in full knowledge
of the value given by 2 to τ₂ (i.e. 2's move is preliminary to 1's move).³ In
this game Γ₂ the player 1 is obviously at an advantage as compared to
his position in the game Γ. We shall therefore call Γ₂ the majorant game
of Γ.
The introduction of these two games Γ₁, Γ₂ achieves this: It ought to
be evident by common sense, and we shall also establish it by an exact
discussion, that for Γ₁, Γ₂ the "best way of playing", i.e. the concept of
rational behavior, has a clear meaning. On the other hand, the game Γ
lies clearly "between" the two games Γ₁, Γ₂; e.g. from 1's point of view Γ₁
is always less and Γ₂ is always more advantageous than Γ.⁴ Thus Γ₁, Γ₂
may be expected to provide lower and upper bounds for the significant
quantities concerning Γ. We shall, of course, discuss all this in an entirely
precise form. A priori, these "bounds" could differ widely and leave a
considerable uncertainty as to the understanding of Γ. Indeed, prima facie
this will seem to be the case for many games. But we shall succeed in
manipulating this technique in such a way, by the introduction of certain
¹ The point is, of course, that this is not a tug-of-war. The two players have opposite
interests, but the means by which they have to promote them are not in opposition to
each other. On the contrary, these "means", i.e. the choices of τ₁, τ₂, are apparently
independent. This discrepancy characterizes the entire problem.
² Thus Γ₁, while extremely simple, is no longer in the normalized form.
³ Thus Γ₂, while extremely simple, is no longer in the normalized form.
⁴ Of course, to be precise we should say "less than or equal to" instead of "less," and
"more than or equal to" instead of "more."
STRICTLY DETERMINED GAMES 101
further devices, as to obtain in the end a precise theory of Γ, which gives
complete answers to all questions.
14.3. Discussion of the Auxiliary Games
14.3.1. Let us first consider the minorant game Γ₁. After player 1
has made his choice τ₁, the player 2 makes his choice τ₂ in full knowledge
of the value of τ₁. Since 2's desire is to minimize ℋ(τ₁, τ₂), it is certain that
he will choose τ₂ so as to make the value of ℋ(τ₁, τ₂) a minimum for this τ₁.
In other words: When 1 chooses a particular value of τ₁ he can already foresee
with certainty what the value of ℋ(τ₁, τ₂) will be. This will be Min_τ₂ ℋ(τ₁, τ₂).¹
This is a function of τ₁ alone. Now 1 wishes to maximize ℋ(τ₁, τ₂), and
since his choice of τ₁ is conducive to the value Min_τ₂ ℋ(τ₁, τ₂), which depends
on τ₁ only, and not at all on τ₂, he will choose τ₁ so as to maximize
Min_τ₂ ℋ(τ₁, τ₂). Thus the value of this quantity will finally be

Max_τ₁ Min_τ₂ ℋ(τ₁, τ₂).²

Summing up:
(14:A:a) The good way (strategy) for 1 to play the minorant game
Γ₁ is to choose τ₁ belonging to the set A, A being the set of
those τ₁ for which Min_τ₂ ℋ(τ₁, τ₂) assumes its maximum value
Max_τ₁ Min_τ₂ ℋ(τ₁, τ₂).
(14:A:b) The good way (strategy) for 2 to play is this: If 1 has
chosen a definite value of τ₁,³ then τ₂ should be chosen belonging
to the set B_τ₁, B_τ₁ being the set of those τ₂ for which
ℋ(τ₁, τ₂) assumes its minimum value Min_τ₂ ℋ(τ₁, τ₂).⁴
On the basis of this we can state further:
(14:A:c) If both players 1 and 2 play the minorant game Γ₁ well,
i.e. if τ₁ belongs to A and τ₂ belongs to B_τ₁, then the value of
ℋ(τ₁, τ₂) will be equal to

v₁ = Max_τ₁ Min_τ₂ ℋ(τ₁, τ₂).

¹ Observe that τ₂ may not be uniquely determined: For a given τ₁ the τ₂-function
ℋ(τ₁, τ₂) may assume its τ₂-minimum for several values of τ₂. The value of ℋ(τ₁, τ₂)
will, however, be the same for all these τ₂, namely the uniquely defined minimum value
Min_τ₂ ℋ(τ₁, τ₂). (Cf. 13.2.1.)
² For the same reason as in footnote 1 above, the value of τ₁ may not be unique, but the
value of Min_τ₂ ℋ(τ₁, τ₂) is the same for all τ₁ in question, namely the uniquely defined
maximum value
Max_τ₁ Min_τ₂ ℋ(τ₁, τ₂).
³ 2 is informed of the value of τ₁ when called upon to make his choice of τ₂; this is
the rule of Γ₁. It follows from our concept of a strategy (cf. 4.1.2. and end of 11.1.1.)
that at this point a rule must be provided for 2's choice of τ₂ for every value of τ₁, irrespective
of whether 1 has played well or not, i.e. whether or not the value chosen belongs
to A.
⁴ In all this, τ₁ is treated as a known parameter on which everything depends, including
the set B_τ₁ from which τ₂ ought to be chosen.
102 ZERO-SUM TWO-PERSON GAMES: THEORY
The truth of the above assertion is immediately established in the mathematical
sense by remembering the definitions of the sets A and B_τ₁, and by
substituting accordingly in the assertion. We leave this exercise, which
is nothing but the classical operation of "substituting the defining for the
defined," to the reader. Moreover, the statement ought to be clear by
common sense.
The entire discussion should make it clear that every play of the game
Γ₁ has a definite value for each player. This value is the above v₁ for the
player 1 and therefore −v₁ for the player 2.
An even more detailed idea of the significance of v₁ is obtained in this
way:
(14:A:d) Player 1 can, by playing appropriately, secure for himself
a gain ≥ v₁, irrespective of what player 2 does. Player 2
can, by playing appropriately, secure for himself a gain ≥ −v₁,
irrespective of what player 1 does.
(Proof: The former obtains by any choice of τ₁ in A. The latter obtains
by any choice of τ₂ in B_τ₁.¹ Again we leave the details to the reader; they
are altogether trivial.)
The above can be stated equivalently thus:
(14:A:e) Player 2 can, by playing appropriately, make it sure that
the gain of player 1 is ≤ v₁, i.e. prevent him from gaining
> v₁, irrespective of what player 1 does. Player 1 can, by playing
appropriately, make it sure that the gain of player 2 is
≤ −v₁, i.e. prevent him from gaining > −v₁, irrespective of
what player 2 does.
14.3.2. We have carried out the discussion of Γ₁ in rather profuse detail
although the "solution" is a rather obvious one. That is, it is very likely
that anybody with a clear vision of the situation will easily reach the same
conclusions "unmathematically," just by exercise of common sense.
Nevertheless we felt it necessary to discuss this case so minutely because
it is a prototype of several others to follow, where the situation will be much
less open to "unmathematical" vision. Also, all essential elements of
complication, as well as the bases for overcoming them, are really present
in this very simplest case. By seeing their respective positions clearly
in this case, it will be possible to visualize them in the subsequent, more
complicated, ones. And it will be possible, in this way only, to judge
precisely how much can be achieved by every particular measure.
14.3.3. Let us now consider the majorant game Γ₂.
Γ₂ differs from Γ₁ only in that the roles of players 1 and 2 are interchanged:
Now player 2 must make his choice τ₂ first, and then the player 1
makes his choice of τ₁ in full knowledge of the value of τ₂.
¹ Recall that τ₁ must be chosen without knowledge of τ₂, while τ₂ is chosen in full
knowledge of τ₁.
But in saying that Γ₂ arises from Γ₁ by interchanging the players 1 and 2,
it must be remembered that these players conserve in the process their
respective functions ℋ₁(τ₁, τ₂), ℋ₂(τ₁, τ₂), i.e. ℋ(τ₁, τ₂), −ℋ(τ₁, τ₂). That
is, 1 still desires to maximize and 2 still desires to minimize ℋ(τ₁, τ₂).
These being understood, we can leave the practically literal repetition
of the considerations of 14.3.1. to the reader. We confine ourselves to
restating the significant definitions, in the form in which they apply to Γ₂.
(14:B:a) The good way (strategy) for 2 to play the majorant game
Γ₂ is to choose τ₂ belonging to the set B, B being the set of
those τ₂ for which Max_τ₁ ℋ(τ₁, τ₂) assumes its minimum value
Min_τ₂ Max_τ₁ ℋ(τ₁, τ₂).
(14:B:b) The good way (strategy) for 1 to play is this: If 2 has
chosen a definite value of τ₂,¹ then τ₁ should be chosen belonging
to the set A_τ₂, A_τ₂ being the set of those τ₁ for which
ℋ(τ₁, τ₂) assumes its maximum value Max_τ₁ ℋ(τ₁, τ₂).²
On the basis of this we can state further:
(14:B:c) If both players 1 and 2 play the majorant game Γ₂ well,
i.e. if τ₂ belongs to B and τ₁ belongs to A_τ₂, then the value of
ℋ(τ₁, τ₂) will be equal to

v₂ = Min_τ₂ Max_τ₁ ℋ(τ₁, τ₂).

The entire discussion should make it clear that every play of the game
Γ₂ has a definite value for each player. This value is the above v₂ for the
player 1 and therefore −v₂ for the player 2.
In order to stress the symmetry of the entire arrangement, we repeat,
mutatis mutandis, the considerations which concluded 14.3.1. They now
serve to give a more detailed idea of the significance of v₂.
(14:B:d) Player 1 can, by playing appropriately, secure for himself
a gain ≥ v₂, irrespective of what player 2 does. Player 2
can, by playing appropriately, secure for himself a gain
≥ −v₂, irrespective of what player 1 does.
(Proof: The latter obtains by any choice of τ₂ in B. The former obtains
by any choice of τ₁ in A_τ₂.³ Cf. with the proof, loc. cit.)
The above can again be stated equivalently thus:
(14:B:e) Player 2 can, by playing appropriately, make it sure
that the gain of player 1 is ≤ v₂, i.e. prevent him from gaining
¹ 1 is informed of the value of τ₂ when called upon to make his choice of τ₁; this is the
rule of Γ₂. (Cf. footnote 3 on p. 101.)
² In all this, τ₂ is treated as a known parameter on which everything depends, including
the set A_τ₂ from which τ₁ ought to be chosen.
³ Remember that τ₂ must be chosen without any knowledge of τ₁, while τ₁ is chosen
with full knowledge of τ₂.
> v₂, irrespective of what player 1 does. Player 1 can, by
playing appropriately, make it sure that the gain of player 2
is ≤ −v₂, i.e. prevent him from gaining > −v₂, irrespective
of what player 2 does.
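The dual computation of (14:B:a)–(14:B:c) can be sketched likewise (an editorial illustration, not part of the text); H is again a payoff matrix with H[t1][t2] = ℋ(τ₁, τ₂):

```python
# Illustrative sketch of 14.3.3 (not from the text): the majorant game Gamma_2.

def majorant_value(H):
    """Return v2 = Min_tau2 Max_tau1 H(tau1, tau2) together with the set B
    of (14:B:a), i.e. those tau2 whose column maximum attains v2."""
    cols = range(len(H[0]))
    col_max = [max(row[t2] for row in H) for t2 in cols]  # Max_tau1 H(tau1, tau2)
    v2 = min(col_max)                                     # Min_tau2 Max_tau1 H(tau1, tau2)
    B = [t2 for t2 in cols if col_max[t2] == v2]
    return v2, B

def minorant_value(H):
    """v1 = Max_tau1 Min_tau2 H(tau1, tau2), for comparison with v2."""
    return max(min(row) for row in H)

# "Matching Pennies" (cf. 14.7.2): here moving second is a decisive advantage.
H = [[1, -1],
     [-1, 1]]
v2, B = majorant_value(H)
print(minorant_value(H), v2)   # v1 = -1, v2 = 1, so v1 <= v2 as (13:A*) requires
```

Note that the same matrix yields both quantities; only the order of the Max and Min scans differs.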
14.3.4. The discussions of Γ₁ and Γ₂, as given in 14.3.1. and 14.3.3.,
respectively, are in a relationship of symmetry or duality to each other;
they obtain from each other, as was pointed out previously (at the beginning
of 14.3.3.), by interchanging the roles of the players 1 and 2. In itself
neither game Γ₁ nor Γ₂ is symmetric with respect to this interchange; indeed,
this is nothing but a restatement of the fact that the interchange of the
players 1 and 2 also interchanges the two games Γ₁ and Γ₂, and so modifies
both. It is in harmony with this that the various statements which we
made in 14.3.1. and 14.3.3. concerning the good strategies of Γ₁ and Γ₂,
respectively, i.e. (14:A:a), (14:A:b), (14:B:a), (14:B:b), loc. cit., were
not symmetric with respect to the players 1 and 2 either. Again we see:
An interchange of the players 1 and 2 interchanges the pertinent definitions
for Γ₁ and Γ₂, and so modifies both.¹
It is therefore very remarkable that the characterizations of the value
of a play (v₁ for Γ₁, v₂ for Γ₂), as given at the end of 14.3.1. and 14.3.3., i.e.
(14:A:c), (14:A:d), (14:A:e), (14:B:c), (14:B:d), (14:B:e), loc. cit. (except
for the formulae at the end of (14:A:c) and of (14:B:c)), are fully
symmetric with respect to the players 1 and 2. According to what was
said above, this is the same thing as asserting that these characterizations
are stated exactly the same way for Γ₁ and Γ₂.² All this is, of course, equally
clear by immediate inspection of the relevant passages.
Thus we have succeeded in defining the value of a play in the same way
for the games Γ₁ and Γ₂, and symmetrically for the players 1 and 2: in
(14:A:c), (14:A:d), (14:A:e), (14:B:c), (14:B:d) and (14:B:e) in 14.3.1.
and in 14.3.3., this in spite of the fundamental difference of the individual
role of each player in these two games. From this we derive the hope
that the definition of the value of a play may be used in the same form for
other games as well, in particular for the game Γ which, as we know,
occupies a middle position between Γ₁ and Γ₂. This hope applies, of
course, only to the concept of value itself, but not to the reasonings which
lead to it; those were specific to Γ₁ and Γ₂, indeed different for Γ₁ and for
Γ₂, and altogether impracticable for Γ itself; i.e., we expect for the future
more from (14:A:d), (14:A:e), (14:B:d), (14:B:e) than from (14:A:a),
(14:A:b), (14:B:a), (14:B:b).
¹ Observe that the original game Γ was symmetric with respect to the two players
1 and 2, if we let each player take his function ℋ₁(τ₁, τ₂), ℋ₂(τ₁, τ₂) with him in an interchange;
i.e. the personal moves of 1 and 2 had both the same character in Γ.
For a narrower concept of symmetry, where the functions ℋ₁(τ₁, τ₂), ℋ₂(τ₁, τ₂) are
held fixed, cf. 14.6.
² This point deserves careful consideration: Naturally these two characterizations
must obtain from each other by interchanging the roles of the players 1 and 2. But in
this case the statements coincide also directly when no interchange of the players is made
at all. This is due to their individual symmetry.
These are clearly only heuristic indications. Thus far we have not
even attempted the proof that a numerical value of a play can be defined
in this manner for Γ. We shall now begin the detailed discussion by which
this gap will be filled. It will be seen that at first definite and serious
difficulties seem to limit the applicability of this procedure, but that it will
be possible to remove them by the introduction of a new device (cf. 14.7.1.
and 17.1.–17.3., respectively).
14.4. Conclusions
14.4.1. We have seen that a perfectly plausible interpretation of the
value of a play determines this quantity as

v₁ = Max_τ₁ Min_τ₂ ℋ(τ₁, τ₂),
v₂ = Min_τ₂ Max_τ₁ ℋ(τ₁, τ₂),

for the games Γ₁, Γ₂, respectively, as far as the player 1 is concerned.¹
Since the game Γ₁ is less advantageous for 1 than the game Γ₂ (in Γ₁
he must make his move prior to, and in full view of, his adversary, while in
Γ₂ the situation is reversed), it is a reasonable conclusion that the value
of Γ₁ is less than, or equal to (i.e. certainly not greater than) the value of Γ₂.
One may argue whether this is a rigorous "proof." The question whether
it is, is hard to decide, but at any rate a close analysis of the verbal argument
involved shows that it is entirely parallel to the mathematical proof
of the same proposition which we already possess. Indeed, the proposition
in question,

v₁ ≤ v₂,

coincides with (13:A*) in 13.4.3. (The φ, x, y there correspond to our
ℋ, τ₁, τ₂.)
Instead of ascribing v₁, v₂ as values to two games Γ₁ and Γ₂ different
from Γ, we may alternatively correlate them with Γ itself, under suitable
assumptions concerning the "intellect" of the players 1 and 2.
Indeed, the rules of the game Γ prescribe that each player must make
his choice (his personal move) in ignorance of the outcome of the choice
of his adversary. It is nevertheless conceivable that one of the players,
say 2, "finds out" his adversary; i.e., that he has somehow acquired the
knowledge as to what his adversary's strategy is.² The basis for this
knowledge does not concern us; it may (but need not) be experience from
previous plays. At any rate we assume that the player 2 possesses this
knowledge. It is possible, of course, that in this situation 1 will change
his strategy; but again let us assume that, for any reason whatever, he
does not do it.³ Under these assumptions we may then say that player 2
has "found out" his adversary.
¹ For player 2 the values are consequently −v₁, −v₂.
² In the game Γ, which is in the normalized form, the strategy is just the actual
choice at the unique personal move of the player. Remember how this normalized form
was derived from the original extensive form of the game; consequently it appears that
this choice corresponds equally to the strategy in the original game.
³ For an interpretation of all these assumptions, cf. 17.3.1.
In this case, conditions in Γ become exactly the same as if the game were
Γ₁, and hence all discussions of 14.3.1. apply literally.
Similarly, we may visualize the opposite possibility, that player 1 has
"found out" his adversary. Then conditions in Γ become exactly the same
as if the game were Γ₂; and hence all discussions of 14.3.3. apply literally.
In the light of the above we can say:
The value of a play of the game Γ is a well-defined quantity if one of the
following two extreme assumptions is made: Either that player 2 "finds
out" his adversary, or that player 1 "finds out" his adversary. In the
first case the value of a play is v₁ for 1, and −v₁ for 2; in the second case
the value of a play is v₂ for 1 and −v₂ for 2.
14.4.2. This discussion shows that if the value of a play of Γ itself,
without any further qualifications or modifications, can be defined at all,
then it must lie between the values of v₁ and v₂. (We mean the values
for the player 1.) I.e. if we write v for the hoped-for value of a play of Γ
itself (for player 1), then there must be

v₁ ≤ v ≤ v₂.

The length of this interval, which is still available for v, is

Δ = v₂ − v₁ ≥ 0.

At the same time Δ expresses the advantage which is gained (in the
game Γ) by "finding out" one's adversary, instead of being "found out"
by him.¹
Now the game may be such that it does not matter which player "finds
out" his opponent; i.e., that the advantage involved is zero. According
to the above, this is the case if and only if

Δ = 0,

or equivalently

v₁ = v₂.

Or, if we replace v₁, v₂ by their definitions:

Max_τ₁ Min_τ₂ ℋ(τ₁, τ₂) = Min_τ₂ Max_τ₁ ℋ(τ₁, τ₂).

If the game Γ possesses these properties, then we call it strictly determined.
The last form of this condition calls for comparison with (13:3) in 13.3.1.
and with the discussions of 13.4.1.–13.5.2. (The φ, x, y there again correspond
to our ℋ, τ₁, τ₂.) Indeed, the statement of (13:B*) in 13.4.3. says
that the game Γ is strictly determined if and only if a saddle point of
ℋ(τ₁, τ₂) exists.
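For a finite matrix the condition Δ = 0 is mechanical to test. A sketch (editorial illustration only, not from the text), with H[t1][t2] = ℋ(τ₁, τ₂):

```python
# Illustrative sketch (not from the text): testing strict determinateness,
# i.e. Delta = v2 - v1 = 0, for a finite payoff matrix H.

def v1(H):
    return max(min(row) for row in H)     # Max_tau1 Min_tau2 H(tau1, tau2)

def v2(H):
    return min(max(row[t2] for row in H)  # Min_tau2 Max_tau1 H(tau1, tau2)
               for t2 in range(len(H[0])))

def delta(H):
    """The interval length Delta = v2 - v1 >= 0 of 14.4.2."""
    return v2(H) - v1(H)

strictly_determined = [[3, 1],
                       [4, 2]]   # a saddle point: entry 2 at tau1 = 1, tau2 = 1
matching_pennies = [[1, -1],
                    [-1, 1]]

print(delta(strictly_determined))   # 0: v1 = v2 = 2, the game is strictly determined
print(delta(matching_pennies))      # 2: "finding out" one's adversary is worth Delta
```

The first matrix has a saddle point and Δ = 0; the second has none and Δ > 0, in accordance with (13:B*).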
14.5. Analysis of Strict Determinateness
14.5.1. Let us assume the game Γ to be strictly determined; i.e. that a
saddle point of ℋ(τ₁, τ₂) exists.
¹ Observe that this expression for the advantage in question applies for both players:
The advantage for the player 1 is v₂ − v₁; for the player 2 it is (−v₁) − (−v₂); and these
two expressions are equal to each other, i.e. to Δ.
In this case it is to be hoped, considering the analysis of 14.4.2., that
it will be possible to interpret the quantity

v = v₁ = v₂

as the value of a play of Γ (for the player 1). Recalling the definitions of
v₁, v₂ and the definition of the saddle value in 13.4.3., and using (13:C*) in
13.5.2., we see that the above equation may also be written as

v = Max_τ₁ Min_τ₂ ℋ(τ₁, τ₂) = Min_τ₂ Max_τ₁ ℋ(τ₁, τ₂)
  = Sa_τ₁/τ₂ ℋ(τ₁, τ₂).

By retracing the steps made at the end of 14.3.1. and at the end of 14.3.3.
it is indeed not difficult to establish that the above can be interpreted as
the value of a play of Γ (for player 1).
Specifically: (14:A:c), (14:A:d), (14:A:e), (14:B:c), (14:B:d), (14:B:e)
of 14.3.1. and 14.3.3., where they apply to Γ₁ and Γ₂ respectively, can now
be obtained for Γ itself. We restate first the equivalent of (14:A:d) and
(14:B:d):
(14:C:d) Player 1 can, by playing appropriately, secure for himself
a gain ≥ v, irrespective of what player 2 does.
Player 2 can, by playing appropriately, secure for himself
a gain ≥ −v, irrespective of what player 1 does.
In order to prove this, we form again the set A of (14:A:a) in 14.3.1.
and the set B of (14:B:a) in 14.3.3. These are actually the sets A⁺, B⁺ of
13.5.1. (the φ corresponds to our ℋ). We repeat:
(14:D:a) A is the set of those τ₁ for which Min_τ₂ ℋ(τ₁, τ₂) assumes its
maximum value; i.e. for which

Min_τ₂ ℋ(τ₁, τ₂) = Max_τ₁ Min_τ₂ ℋ(τ₁, τ₂) = v.

(14:D:b) B is the set of those τ₂ for which Max_τ₁ ℋ(τ₁, τ₂) assumes
its minimum value; i.e. for which

Max_τ₁ ℋ(τ₁, τ₂) = Min_τ₂ Max_τ₁ ℋ(τ₁, τ₂) = v.

Now the demonstration of (14:C:d) is easy:
Let player 1 choose τ₁ from A. Then, irrespective of what player 2
does, i.e. for every τ₂, we have ℋ(τ₁, τ₂) ≥ Min_τ₂ ℋ(τ₁, τ₂) = v, i.e. 1's gain
is ≥ v.
Let player 2 choose τ₂ from B. Then, irrespective of what player 1
does, i.e. for every τ₁, we have ℋ(τ₁, τ₂) ≤ Max_τ₁ ℋ(τ₁, τ₂) = v, i.e. 1's gain
is ≤ v and so 2's gain is ≥ −v.
This completes the proof.
We pass now to the equivalent of (14:A:e) and (14:B:e). Indeed,
(14:C:d) as formulated above can be equivalently formulated thus:
(14:C:e) Player 2 can, by playing appropriately, make it sure that
the gain of player 1 is ≤ v, i.e. prevent him from gaining > v,
irrespective of what player 1 does.
Player 1 can, by playing appropriately, make it sure that
the gain of player 2 is ≤ −v, i.e. prevent him from gaining
> −v, irrespective of what player 2 does.
(14:C:d) and (14:C:e) establish satisfactorily our interpretation of v as the
value of a play of Γ for the player 1, and of −v for the player 2.
14.5.2. We consider now the equivalents of (14:A:a), (14:A:b), (14:B:a),
(14:B:b).
Owing to (14:C:d) in 14.5.1. it is reasonable to define a good way for 1
to play the game Γ as one which guarantees him a gain which is greater
than or equal to the value of a play for 1, irrespective of what 2 does; i.e. a
choice of τ₁ for which ℋ(τ₁, τ₂) ≥ v for all τ₂. This may be equivalently
stated as Min_τ₂ ℋ(τ₁, τ₂) ≥ v.
Now we have always Min_τ₂ ℋ(τ₁, τ₂) ≤ Max_τ₁ Min_τ₂ ℋ(τ₁, τ₂) = v.
Hence the above condition for τ₁ amounts to Min_τ₂ ℋ(τ₁, τ₂) = v, i.e.
(by (14:D:a) in 14.5.1.) to τ₁ being in A.
Again, by (14:C:d) in 14.5.1. it is reasonable to define the good way for
2 to play the game Γ as one which guarantees him a gain which is greater
than or equal to the value of a play for 2, irrespective of what 1 does; i.e.
a choice of τ₂ for which −ℋ(τ₁, τ₂) ≥ −v for all τ₁. That is, ℋ(τ₁, τ₂) ≤ v
for all τ₁. This may be equivalently stated as Max_τ₁ ℋ(τ₁, τ₂) ≤ v.
Now we have always Max_τ₁ ℋ(τ₁, τ₂) ≥ Min_τ₂ Max_τ₁ ℋ(τ₁, τ₂) = v.
Hence the above condition for τ₂ amounts to Max_τ₁ ℋ(τ₁, τ₂) = v, i.e. (by
(14:D:b) in 14.5.1.) to τ₂ being in B.
So we have:
(14:C:a) The good way (strategy) for 1 to play the game Γ is to
choose any τ₁ belonging to A, A being the set of (14:D:a) in
14.5.1.
(14:C:b) The good way (strategy) for 2 to play the game Γ is to
choose any τ₂ belonging to B, B being the set of (14:D:b)
in 14.5.1.¹
Finally our definition of the good way of playing, as stated at the
beginning of this section, yields immediately the equivalent of (14:A:c)
or (14:B:c):
(14:C:c) If both players 1 and 2 play the game Γ well, i.e. if τ₁
belongs to A and τ₂ belongs to B, then the value of ℋ(τ₁, τ₂)
will be equal to the value of a play (for 1), i.e. to v.
We add the observation that (13:D*) in 13.5.2. and the remark concerning
the sets A, B before (14:D:a), (14:D:b) in 14.5.1. together give this:
(14:C:f) Both players 1 and 2 play the game Γ well, i.e. τ₁ belongs
to A and τ₂ belongs to B, if and only if τ₁, τ₂ is a saddle point
of ℋ(τ₁, τ₂).
¹ Since this is the game Γ, each player must make his choice (of τ₁ or τ₂) without
knowledge of the other player's choice (of τ₂ or τ₁). Contrast this with (14:A:b) in
14.3.1. for Γ₁ and with (14:B:b) in 14.3.3. for Γ₂.
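(14:C:f) can be checked numerically for small matrices: the pairs of good strategies, one from A and one from B, are exactly the saddle points of ℋ. A sketch (editorial illustration, not part of the text):

```python
# Illustrative sketch (not from the text): (14:C:f) identifies the pairs of
# good strategies (tau1 in A, tau2 in B) with the saddle points of H.

def good_sets(H):
    """The sets A of (14:D:a) and B of (14:D:b), for a strictly determined H."""
    row_min = [min(row) for row in H]
    col_max = [max(row[t2] for row in H) for t2 in range(len(H[0]))]
    v = max(row_min)   # = v1; for a strictly determined game also = v2
    A = [t1 for t1, m in enumerate(row_min) if m == v]
    B = [t2 for t2, m in enumerate(col_max) if m == v]
    return A, B

def saddle_points(H):
    """All (tau1, tau2) at which H is minimal in its row and maximal in its column."""
    return [(t1, t2)
            for t1, row in enumerate(H) for t2, h in enumerate(row)
            if h == min(row) and h == max(r[t2] for r in H)]

H = [[3, 1],
     [4, 2]]
A, B = good_sets(H)
print([(t1, t2) for t1 in A for t2 in B] == saddle_points(H))  # True
```

For this matrix A = [1], B = [1], and the single saddle point is (1, 1), in agreement with (14:C:f).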
14.6. The Interchange of Players. Symmetry
14.6. (14:C:a)–(14:C:f) in 14.5.1. and 14.5.2. settle everything as far
as the strictly determined two-person games are concerned. In this
connection let us remark that in 14.3.1., 14.3.3. for Γ₁, Γ₂ we derived
(14:A:d), (14:A:e), (14:B:d), (14:B:e) from (14:A:a), (14:A:b), (14:B:a),
(14:B:b), while in 14.5.1., 14.5.2. for Γ itself we obtained (14:C:a),
(14:C:b) from (14:C:d), (14:C:e). This is an advantage, since the arguments
of 14.3.1., 14.3.3. in favor of (14:A:a), (14:A:b), (14:B:a), (14:B:b)
were of a much more heuristic character than those of 14.5.1., 14.5.2. in
favor of (14:C:d), (14:C:e).
Our use of the function ℋ(τ₁, τ₂) = ℋ₁(τ₁, τ₂) implies a certain asymmetry
of the arrangement; the player 1 is thereby given a special role. It ought
to be intuitively clear, however, that equivalent results would be obtained
if we gave this special role to the player 2 instead. Since interchanging
the players 1 and 2 will play a certain role later, we shall nevertheless give a
brief mathematical discussion of this point also.
Interchanging the players 1 and 2 in the game Γ (of which we need
not assume now that it is strictly determined) amounts to replacing the
functions ℋ₁(τ₁, τ₂), ℋ₂(τ₁, τ₂) by ℋ₂(τ₂, τ₁), ℋ₁(τ₂, τ₁).¹ ² It follows, therefore,
that this interchange means replacing the function ℋ(τ₁, τ₂) by −ℋ(τ₂, τ₁).
Now the change of sign has the effect of interchanging the operations
Max and Min. Consequently the quantities

Max_τ₁ Min_τ₂ ℋ(τ₁, τ₂) = v₁,
Min_τ₂ Max_τ₁ ℋ(τ₁, τ₂) = v₂,

as defined in 14.4.1., become now

Max_τ₁ Min_τ₂ [−ℋ(τ₂, τ₁)] = −Min_τ₁ Max_τ₂ ℋ(τ₂, τ₁)
  = −Min_τ₂ Max_τ₁ ℋ(τ₁, τ₂)³ = −v₂,
Min_τ₂ Max_τ₁ [−ℋ(τ₂, τ₁)] = −Max_τ₂ Min_τ₁ ℋ(τ₂, τ₁)
  = −Max_τ₁ Min_τ₂ ℋ(τ₁, τ₂)³ = −v₁.

So v₁, v₂ become −v₂, −v₁.⁴ Hence the value of

Δ = v₂ − v₁ = (−v₁) − (−v₂)

¹ This is no longer the operation of interchanging the players used in 14.3.4. There
we were only interested in the arrangement and the state of information at each move,
and the players 1 and 2 were considered as taking their functions ℋ₁(τ₁, τ₂) and ℋ₂(τ₁, τ₂)
with them (cf. footnote 1 on p. 104). In this sense Γ was symmetric, i.e. unaffected by
that interchange (id.).
At present we interchange the roles of the players 1 and 2 completely, even in their
functions ℋ₁(τ₁, τ₂) and ℋ₂(τ₁, τ₂).
² We had to interchange the variables τ₁, τ₂ since τ₁ represents the choice of player 1
and τ₂ the choice of player 2. Consequently it is now τ₂ which has the domain 1, ..., β₁.
Thus it is again true for ℋₖ(τ₂, τ₁), as it was before for ℋₖ(τ₁, τ₂), that the variable before
the comma has the domain 1, ..., β₁ and the variable after the comma the domain
1, ..., β₂.
³ This is a mere change of notations: The variables τ₁, τ₂ are changed around to τ₂, τ₁.
⁴ This is in harmony with footnote 1 on p. 105, as it should be.
is unaffected,¹ and if Γ is strictly determined, it remains so, since this property
is equivalent to Δ = 0. In this case v = v₁ = v₂ becomes

−v = −v₂ = −v₁.

It is now easy to verify that all statements (14:C:a)–(14:C:f) in 14.5.1.,
14.5.2. remain the same when the players 1 and 2 are interchanged.
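The effect of the interchange can also be verified numerically (an editorial sketch, not from the text): replacing ℋ(τ₁, τ₂) by −ℋ(τ₂, τ₁) turns v₁, v₂ into −v₂, −v₁.

```python
# Illustrative sketch (not from the text): interchanging the players replaces
# H(tau1, tau2) by -H(tau2, tau1); v1 and v2 then become -v2 and -v1.

def v1(H):
    return max(min(row) for row in H)

def v2(H):
    return min(max(row[t2] for row in H) for t2 in range(len(H[0])))

def interchanged(H):
    """The matrix K with K[tau1][tau2] = -H[tau2][tau1]."""
    return [[-H[t2][t1] for t2 in range(len(H))] for t1 in range(len(H[0]))]

H = [[2, -1],
     [-3, 4]]             # not strictly determined: v1 = -1, v2 = 2
K = interchanged(H)
print(v1(K) == -v2(H))    # True
print(v2(K) == -v1(H))    # True
```

Note that Δ = v₂ − v₁ comes out the same for H and K, in harmony with the text.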
14.7. Non-strictly Determined Games
14.7.1. All this disposes completely of the strictly determined games,
but of no others. For a game Γ which is not strictly determined we have
Δ > 0; i.e. in such a game it involves a positive advantage to "find out"
one's adversary. Hence there is an essential difference between the
results, i.e. the values, in Γ₁ and in Γ₂, and therefore also between the good
ways of playing these games. The considerations of 14.3.1., 14.3.3. provide
therefore no guidance for the treatment of Γ. Those of 14.5.1., 14.5.2.
do not apply either, since they make use of the existence of saddle points
of ℋ(τ₁, τ₂) and of the validity of

Max_τ₁ Min_τ₂ ℋ(τ₁, τ₂) = Min_τ₂ Max_τ₁ ℋ(τ₁, τ₂),

i.e. of Γ being strictly determined. There is, of course, some plausibility
in the inequality at the beginning of 14.4.2. According to this, the value v
of a play of Γ (for the player 1), if such a concept can be formed at all in
this generality, for which we have no evidence as yet,² is restricted by

v₁ ≤ v ≤ v₂.

But this still leaves an interval of length Δ = v₂ − v₁ > 0 open to v;
and, besides, the entire situation is conceptually most unsatisfactory.
One might be inclined to give up altogether: Since there is a positive
advantage in "finding out" one's opponent in such a game Γ, it seems
plausible to say that there is no chance to find a solution unless one makes
some definite assumption as to "who finds out whom," and to what extent.³
We shall see in 17. that this is not so, and that in spite of Δ > 0 a solution
can be found along the same lines as before. But we propose first,
without attacking that difficulty, to enumerate certain games Γ with Δ > 0,
and others with Δ = 0. The first, which are not strictly determined,
will be dealt with briefly now; their detailed investigation will be undertaken
in 17.1. The second, which are strictly determined, will be analyzed
in considerable detail.
14.7.2. Since there exist functions ℋ(τ₁, τ₂) without saddle points (cf.
13.4.1., 13.4.2.; the φ(x, y) there is our ℋ(τ₁, τ₂)), there exist games Γ
which are not strictly determined. It is worth while to re-examine those examples, i.e.
¹ This is in harmony with footnote 1 on p. 106, as it should be.
² Cf. however, 17.8.1.
³ In plainer language: Δ > 0 means that it is not possible in this game for each player
simultaneously to be cleverer than his opponent. Consequently it seems desirable to
know just how clever each particular player is.
the functions described by the matrices of Figs. 12, 13 on p. 94, in the
light of our present application. That is, to describe explicitly the games
to which they belong. (In each case, replace φ(x, y) by our ℋ(τ₁, τ₂), τ₂
being the column number and τ₁ the row number in every matrix. Cf. also
Fig. 15 on p. 99.)
Fig. 12: This is the game of "Matching Pennies." Let, for τ₁ and for
τ₂, 1 be "heads" and 2 be "tails"; then the matrix element has the value 1
if τ₁, τ₂ "match," i.e. are equal to each other, and −1 if they do not.
So player 1 "matches" player 2: He wins (one unit) if they "match" and
he loses (one unit) if they do not.
Fig. 13: This is the game of "Stone, Paper, Scissors." Let, for τ₁ and for
τ₂, 1 be "stone," 2 be "paper," and 3 be "scissors." The distribution of
elements 1 and −1 over the matrix expresses that "paper" defeats "stone,"
"scissors" defeat "paper," "stone" defeats "scissors."¹ Thus player 1
wins (one unit) if he defeats player 2, and he loses (one unit) if he is defeated.
Otherwise (if both players make the same choice) the game is tied.
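The two matrices can be written out and checked for Δ > 0 (an editorial sketch, not part of the text; rows are τ₁ and columns τ₂, as in Figs. 12, 13):

```python
# Illustrative sketch (not from the text): the matrices of Figs. 12 and 13,
# rows tau1 and columns tau2; in both games Delta = v2 - v1 > 0.

matching_pennies = [[1, -1],    # tau1 = 1 ("heads"): wins on a match
                    [-1, 1]]    # tau1 = 2 ("tails")

# Order of choices: 1 = "stone", 2 = "paper", 3 = "scissors".
stone_paper_scissors = [[0, -1, 1],
                        [1, 0, -1],
                        [-1, 1, 0]]

def delta(H):
    v1 = max(min(row) for row in H)
    v2 = min(max(row[t2] for row in H) for t2 in range(len(H[0])))
    return v2 - v1

print(delta(matching_pennies))       # 2
print(delta(stone_paper_scissors))   # 2
```

In both games v₁ = −1 and v₂ = 1: whichever player is "found out" loses one unit, so the advantage Δ of finding out one's adversary is 2.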
14.7.3. These two examples show the difficulties which we encounter
in a not strictly determined game in a particularly clear form; just because
of their extreme simplicity the difficulty is perfectly isolated here, in vitro.
The point is that in "Matching Pennies" and in "Stone, Paper, Scissors"
any way of playing, i.e. any τ₁ or any τ₂, is just as good as any other:
There is no intrinsic advantage or disadvantage in "heads" or in "tails"
per se, nor in "stone," "paper" or "scissors" per se. The only thing which
matters is to guess correctly what the adversary is going to do; but how are
we going to describe that without further assumptions about the players'
"intellects"?²
There are, of course, more complicated games which are not strictly
determined and which are important from various more subtle, technical
viewpoints (cf. 18., 19.). But as far as the main difficulty is concerned,
the simple games of "Matching Pennies" and of "Stone, Paper, Scissors"
are perfectly characteristic.
14.8. Program of a Detailed Analysis of Strict Determinateness
14.8. While the strictly determined games Γ, for which our solution
is valid, are thus a special case only, one should not underestimate the size
of the territory they cover. The fact that we are using the normalized
form for the game Γ may tempt one to such an underestimation: It makes things
look more elementary than they really are. One must remember that the
τ₁, τ₂ represent strategies in the extensive form of the game, which may be
of a very complicated structure, as mentioned in 14.1.1.
In order to understand the meaning of strict determinateness, it is
therefore necessary to investigate it in relation to the extensive form of the
game. This brings up questions concerning the detailed nature of the moves,
¹ "Paper covers the stone, scissors cut the paper, stone grinds the scissors."
² As mentioned before, we shall show in 17.1. that it can be done.
chance or personal, the state of information of the players, etc.; i.e. we
come to the structural analysis based on the extensive form, as mentioned
in 12.1.1.
We are particularly interested in those games in which each player who
makes a personal move is perfectly informed about the outcome of the
choices of all anterior moves. These games were already mentioned in
6.4.1., and it was stated there that they are generally considered to be of a
particular rational character. We shall now establish this in a precise
sense, by proving that all such games are strictly determined. And this
will be true not only when all moves are personal, but also when chance
moves too are present.
15. Games with Perfect Information
15.1. Statement of Purpose. Induction
15.1.1. We wish to investigate the zero-sum two-person games somewhat further, with the purpose of finding as wide a subclass among them as possible in which only strictly determined games occur; i.e. where the quantities

$$v_1 = \operatorname{Max}_{\tau_1} \operatorname{Min}_{\tau_2} \mathcal{H}(\tau_1, \tau_2),$$
$$v_2 = \operatorname{Min}_{\tau_2} \operatorname{Max}_{\tau_1} \mathcal{H}(\tau_1, \tau_2)$$

of 14.4.1., which turned out to be so important for the appraisal of the game, fulfill

$$v_1 = v_2 = v.$$

We shall show that when perfect information prevails in $\Gamma$, i.e. when preliminarity is equivalent to anteriority (cf. 6.4.1. and the end of 14.8.), then $\Gamma$ is strictly determined. We shall also discuss the conceptual significance of this result (cf. 15.8.). Indeed, we shall obtain this as a special case of a more general rule concerning $v_1$, $v_2$ (cf. 15.5.3.).
We begin our discussions in even greater generality, by considering a perfectly unrestricted general $n$-person game $\Gamma$. The greater generality will be useful in a subsequent instance.
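In modern terms, $v_1$ and $v_2$ can be computed mechanically once a game is given in normalized form. The following sketch is not part of the original text; the matrices and function names are purely illustrative:

```python
# Illustrative sketch: v1 and v2 of a zero-sum two-person game given in
# normalized form as a matrix H[t1][t2] of player 1's payoffs.
def v1(H):
    # Max over tau_1 of Min over tau_2
    return max(min(row) for row in H)

def v2(H):
    # Min over tau_2 of Max over tau_1
    return min(max(H[t1][t2] for t1 in range(len(H)))
               for t2 in range(len(H[0])))

matching_pennies = [[1, -1], [-1, 1]]   # not strictly determined
determined = [[3, 1], [2, 0]]           # strictly determined, v = 1
print(v1(matching_pennies), v2(matching_pennies))  # -1 1
print(v1(determined), v2(determined))              # 1 1
```

In the first example $v_1 < v_2$, so the game is not strictly determined; in the second $v_1 = v_2 = v = 1$.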
15.1.2. Let $\Gamma$ be a general $n$-person game, given in its extensive form. We shall consider certain aspects of $\Gamma$, first in our original pre-set-theoretical terminology of 6., 7. (cf. 15.1.), and then translate everything into the partition and set terminology of 9., 10. (cf. 15.2. et sequ.). The reader will probably obtain a full understanding with the help of the first discussion alone; and the second, with its rather formalistic machinery, is only undertaken for the sake of absolute rigor, in order to show that we are really proceeding strictly on the basis of our axioms of 10.1.1.

We consider the sequence of all moves in $\Gamma$: $\mathcal{M}_1, \mathcal{M}_2, \ldots, \mathcal{M}_\nu$. Let us fix our attention on the first move, $\mathcal{M}_1$, and the situation which exists at the moment of this move.

GAMES WITH PERFECT INFORMATION 113

Since nothing is anterior to this move, nothing is preliminary to it either; i.e. the characteristics of this move depend on nothing, they are constants. This applies in the first place to the fact of whether $\mathcal{M}_1$ is a chance move or a personal move, and in the latter case, to which player $\mathcal{M}_1$ belongs, i.e. to the value of $k_1 = 0, 1, \ldots, n$ respectively, in the sense of 6.2.1. And it applies also to the number of alternatives $\alpha_1$ at $\mathcal{M}_1$ and, for a chance move (i.e. when $k_1 = 0$), to the values of the probabilities $p_1(1), \ldots, p_1(\alpha_1)$. The result of the choice at $\mathcal{M}_1$, chance or personal, is a $\sigma_1 = 1, \ldots, \alpha_1$.

Now a plausible step suggests itself for the mathematical analysis of the game $\Gamma$, which is entirely in the spirit of the method of "complete induction" widely used in all branches of mathematics. It replaces, if successful, the analysis of $\Gamma$ by the analysis of other games which contain one move less than $\Gamma$.¹ This step consists in choosing a $\bar{\sigma}_1 = 1, \ldots, \alpha_1$ and denoting by $\Gamma_{\bar{\sigma}_1}$ a game which agrees with $\Gamma$ in every detail except that the move $\mathcal{M}_1$ is omitted, and instead the choice $\sigma_1 = \bar{\sigma}_1$ is dictated (by the rules of the new game).² $\Gamma_{\bar{\sigma}_1}$ has, indeed, one move less than $\Gamma$: its moves are $\mathcal{M}_2, \ldots, \mathcal{M}_\nu$.³ And our "inductive" step will have been successful if we can derive the essential characteristics of $\Gamma$ from those of all $\Gamma_{\bar{\sigma}_1}$, $\bar{\sigma}_1 = 1, \ldots, \alpha_1$.
15.1.3. It must be noted, however, that the possibility of forming $\Gamma_{\bar{\sigma}_1}$ is dependent upon a certain restriction on $\Gamma$. Indeed, every player who makes a certain personal move in the game $\Gamma_{\bar{\sigma}_1}$ must be fully informed about the rules of this game. Now this knowledge consists of the knowledge of the rules of the original game $\Gamma$ plus the value of the dictated choice at $\mathcal{M}_1$, i.e. $\bar{\sigma}_1$. Hence $\Gamma_{\bar{\sigma}_1}$ can be formed out of $\Gamma$, without modifying the rules which govern the players' state of information in $\Gamma$, only if the outcome of the choice at $\mathcal{M}_1$ is, by virtue of the original rules of $\Gamma$, known to every player at any personal move of his $\mathcal{M}_2, \ldots, \mathcal{M}_\nu$; i.e. $\mathcal{M}_1$ must be preliminary to all personal moves $\mathcal{M}_2, \ldots, \mathcal{M}_\nu$. We restate this:

(15:A) $\Gamma_{\bar{\sigma}_1}$ can be formed without essentially modifying the structure of $\Gamma$ for that purpose only if $\Gamma$ possesses the following property:

(15:A:a) $\mathcal{M}_1$ is preliminary to all personal moves $\mathcal{M}_2, \ldots, \mathcal{M}_\nu$.⁴
1 I.e. have $\nu - 1$ moves instead of $\nu$. Repeated application of this "inductive" step, if feasible at all, will reduce the game $\Gamma$ to one with 0 moves, i.e. to one of fixed, unalterable outcome. And this means, of course, a complete solution for $\Gamma$. (Cf. (15:C:a) in 15.6.1.)
2 E.g. $\Gamma$ is the game of Chess, and $\bar{\sigma}_1$ a particular opening move, i.e. a choice at $\mathcal{M}_1$, of "white," i.e. player 1. Then $\Gamma_{\bar{\sigma}_1}$ is again Chess, but beginning with a move of the character of the second move in ordinary Chess, one of "black," player 2, and in the position created by the "opening move" $\bar{\sigma}_1$. This dictated "opening move" may, but need not, be a conventional one (like E2-E4).
The same operation is exemplified by forms of Tournament Bridge where the "umpires" assign the players definite, known, and previously selected "hands." (This is done, e.g., in Duplicate Bridge.)
In the first example, the dictated move $\mathcal{M}_1$ was originally personal (of "white," player 1); in the second example it was originally chance (the "deal"). In some games occasionally "handicaps" are used which amount to one or more such operations.
3 We should really use the indices $1, \ldots, \nu - 1$ and indicate the dependence on $\bar{\sigma}_1$, e.g. by writing $\mathcal{M}_1^{\bar{\sigma}_1}, \ldots, \mathcal{M}_{\nu-1}^{\bar{\sigma}_1}$; but we prefer the simpler notation $\mathcal{M}_2, \ldots, \mathcal{M}_\nu$.
4 This is the terminology of 6.3.; i.e. we use the special form of dependence in the sense of 7.2.1. Using the general description of 7.2.1. we must state (15:A:a) like this: for every personal move $\mathcal{M}_\kappa$, $\kappa = 2, \ldots, \nu$, the set $\Phi_\kappa$ contains the function $\sigma_1$.
15.2. The Exact Condition (First Step)
15.2.1. We now translate 15.1.2., 15.1.3. into the partition and set terminology of 9., 10. (cf. also the beginning of 15.1.2.). The notations of 10.1. will therefore be used.

$\mathcal{A}_1$ consists of the one set $\Omega$ ((10:1:f) in 10.1.1.), and it is a subpartition of $\mathcal{B}_1$ ((10:1:a) in 10.1.1.); hence $\mathcal{B}_1$ too consists of the one set $\Omega$ (the others being empty).¹ ² That is:

$$B_1(k) = \begin{cases} \Omega & \text{for precisely one } k, \text{ say } k = k_1, \\ \emptyset & \text{for all } k \neq k_1. \end{cases}$$

This $k_1 = 0, 1, \ldots, n$ determines the character of $\mathcal{M}_1$; it is the $k_1$ of 6.2.1. If $k_1 = 1, \ldots, n$, i.e. if the move is personal, then $\mathcal{B}_1$ is also a subpartition of $\mathcal{D}_1(k_1)$ ((10:1:d) in 10.1.1. This was only postulated within $B_1(k_1)$, but $B_1(k_1) = \Omega$). Hence $\mathcal{D}_1(k_1)$ too consists of the one set $\Omega$.³ And for $k \neq k_1$, the $\mathcal{D}_1(k)$, which is a partition in $B_1(k) = \emptyset$ ((10:A:g) in 10.1.1.), must be empty.

So we have precisely one $A_1$ in $\mathcal{A}_1$, which is $\Omega$; and for $k_1 = 1, \ldots, n$ precisely one $D_1$ in all the $\mathcal{D}_1(k)$, which is also $\Omega$; while for $k_1 = 0$ there are no $D_1$ in all the $\mathcal{D}_1(k)$.

The move $\mathcal{M}_1$ consists of the choice of a $C_1$ from $\mathcal{C}_1(k_1)$: by chance if $k_1 = 0$; by the player $k_1$ if $k_1 = 1, \ldots, n$. $C_1$ is automatically a subset of the unique $A_1 (= \Omega)$ in the former case, and of the unique $D_1 (= \Omega)$ in the latter. The number of these $C_1$ is $\alpha_1$ (cf. 9.1.5., particularly footnote 2 on p. 70); and since the $A_1$ or $D_1$ in question is fixed, this $\alpha_1$ is a definite constant. $\alpha_1$ is the number of alternatives at $\mathcal{M}_1$, the $\alpha_1$ of 6.2.1. and 15.1.2. These $C_1$ correspond to the $\sigma_1 = 1, \ldots, \alpha_1$ of 15.1.2., and we denote them accordingly by $C_1(1), \ldots, C_1(\alpha_1)$.⁴ Now (10:1:h) in 10.1.1. shows, as is readily verified, that $\mathcal{A}_2$ is also the set of the $C_1(1), \ldots, C_1(\alpha_1)$, i.e. equal to $\mathcal{C}_1$.

So far our analysis has been perfectly general, valid for $\mathcal{M}_1$ (and to a certain extent for $\mathcal{M}_2$) of any game $\Gamma$. The reader should translate these properties into everyday terminology in the sense of 8.4.2. and 10.4.2.

We pass now to $\Gamma_{\bar{\sigma}_1}$. This should obtain from $\Gamma$ by dictating the move $\mathcal{M}_1$, as described in 15.1.2., by putting $\sigma_1 = \bar{\sigma}_1$. At the same time the moves of the game are restricted to $\mathcal{M}_2, \ldots, \mathcal{M}_\nu$. This means that the
1 This $\mathcal{B}_1$ is an exception from (8:B:a) in 8.3.1.; cf. the remark concerning this (8:B:a) in footnote 1 on p. 63, and also footnote 4 on p. 69.
2 Proof: $\Omega$ belongs to $\mathcal{A}_1$, which is a subpartition of $\mathcal{B}_1$; hence $\Omega$ is a subset of an element of $\mathcal{B}_1$. This element is necessarily equal to $\Omega$. All other elements of $\mathcal{B}_1$ are therefore disjunct from $\Omega$ (cf. 8.3.1.), i.e. empty.
3 $\mathcal{A}_1$, $\mathcal{D}_1(k_1)$, unlike $\mathcal{B}_1$ (cf. above), must fulfill both (8:B:a), (8:B:b) in 8.3.1.; hence both have no further elements besides $\Omega$.
4 They represent the alternatives $\mathcal{A}_1(1), \ldots, \mathcal{A}_1(\alpha_1)$ of 6.2. and 9.1.4., 9.1.5.
element $\pi$ which represents the actual play can no longer vary over all of $\Omega$, but is restricted to $C_1(\bar{\sigma}_1)$. And the partitions enumerated in 9.2.1. are restricted to those with $\kappa = 2, \ldots, \nu$,¹ (and $\kappa = \nu + 1$ for $\mathcal{A}_\kappa$).
15.2.2. We now come to the equivalent of the restriction of 15.1.3. The possibility of carrying out the changes formulated at the end of 15.2.1. is dependent upon a certain restriction on $\Gamma$.

As indicated, we wish to restrict the play, i.e. $\pi$, within $C_1(\bar{\sigma}_1)$. Therefore all those sets which figured in the description of $\Gamma$ and which were subsets of $\Omega$ must be made over into subsets of $C_1(\bar{\sigma}_1)$, and the partitions into partitions within $C_1(\bar{\sigma}_1)$ (or within subsets of $C_1(\bar{\sigma}_1)$). How is this to be done?

The partitions which make up the description of $\Gamma$ (cf. 9.2.1.) fall into two classes: those which represent objective facts (the $\mathcal{A}_\kappa$, the $\mathcal{B}_\kappa = (B_\kappa(0), B_\kappa(1), \ldots, B_\kappa(n))$ and the $\mathcal{C}_\kappa(k)$, $k = 0, 1, \ldots, n$), and those which represent only the players' state of information,² the $\mathcal{D}_\kappa(k)$, $k = 1, \ldots, n$. We assume, of course, $\kappa \geq 2$ (cf. the end of 15.2.1.).

In the first class of partitions we need only replace each element by its intersection with $C_1(\bar{\sigma}_1)$. Thus $\mathcal{B}_\kappa$ is modified by replacing its elements $B_\kappa(0), B_\kappa(1), \ldots, B_\kappa(n)$ by $C_1(\bar{\sigma}_1) \cap B_\kappa(0), C_1(\bar{\sigma}_1) \cap B_\kappa(1), \ldots, C_1(\bar{\sigma}_1) \cap B_\kappa(n)$. In $\mathcal{A}_\kappa$ even this is not necessary: it is a subpartition of $\mathcal{A}_2$ (since $\kappa \geq 2$, cf. 10.4.1.), i.e. of the system of pairwise disjunct sets $(C_1(1), \ldots, C_1(\alpha_1))$ (cf. 15.2.1.); hence we keep only those elements of $\mathcal{A}_\kappa$ which are subsets of $C_1(\bar{\sigma}_1)$, i.e. that part of $\mathcal{A}_\kappa$ which lies in $C_1(\bar{\sigma}_1)$. The $\mathcal{C}_\kappa(k)$ should be treated like $\mathcal{B}_\kappa$, but we prefer to postpone this discussion.

In the second class of partitions, i.e. for the $\mathcal{D}_\kappa(k)$, we cannot do anything like it. Replacing the elements of $\mathcal{D}_\kappa(k)$ by their intersections with $C_1(\bar{\sigma}_1)$ would involve a modification of a player's state of information³ and should therefore be avoided. The only permissible procedure would be that which was feasible in the case of $\mathcal{A}_\kappa$: replacement of $\mathcal{D}_\kappa(k)$ by that part of itself which lies in $C_1(\bar{\sigma}_1)$. But this is applicable only if $\mathcal{D}_\kappa(k)$, like $\mathcal{A}_\kappa$ before, is a subpartition of $\mathcal{A}_2$ (for $\kappa \geq 2$). So we must postulate this.

Now $\mathcal{C}_\kappa(k)$ takes care of itself: it is a subpartition of $\mathcal{D}_\kappa(k)$ ((10:1:c) in 10.1.1.), hence of $\mathcal{A}_2$ (by the above assumption); and so we can replace it by that part of itself which lies in $C_1(\bar{\sigma}_1)$.

So we see: the necessary restriction of $\Gamma$ is that every $\mathcal{D}_\kappa(k)$ (with $\kappa \geq 2$) must be a subpartition of $\mathcal{A}_2$. Recall now the interpretation of 8.4.2. and of (10:A:d*), (10:A:g*) in 10.1.2. They give to this restriction the meaning that every player at a personal move $\mathcal{M}_2, \ldots, \mathcal{M}_\nu$ is fully
1 We do not wish to change the enumeration to $\kappa = 1, \ldots, \nu - 1$; cf. footnote 3 on p. 113.
2 $\mathcal{A}_\kappa$ represents the umpire's state of information, but this is an objective fact: the events up to that moment have determined the course of the play precisely to that extent (cf. 9.1.2.).
3 Namely, giving him additional information.
informed about the state of things after the move $\mathcal{M}_1$ (i.e. before the move $\mathcal{M}_2$) as expressed by $\mathcal{A}_2$. (Cf. also the discussion before (10:B) in 10.4.2.) That is, $\mathcal{M}_1$ must be preliminary to all moves $\mathcal{M}_2, \ldots, \mathcal{M}_\nu$.

Thus we have again obtained the condition (15:A:a) of 15.1.3. We leave to the reader the simple verification that the game $\Gamma_{\bar{\sigma}_1}$ fulfills the requirements of 10.1.1.
15.3. The Exact Condition (Entire Induction)
15.3.1. As indicated at the end of 15.1.2., we wish to obtain the characteristics of $\Gamma$ from those of all $\Gamma_{\bar{\sigma}_1}$, $\bar{\sigma}_1 = 1, \ldots, \alpha_1$, since this, if successful, would be a typical step of a "complete induction."

For the moment, however, the only class of games for which we possess any kind of (mathematical) characteristics consists of the zero-sum two-person games: for these we have the quantities $v_1$, $v_2$ (cf. 15.1.1.). Let us therefore assume that $\Gamma$ is a zero-sum two-person game.

Now we shall see that the $v_1$, $v_2$ of $\Gamma$ can indeed be expressed with the help of those of the $\Gamma_{\bar{\sigma}_1}$, $\bar{\sigma}_1 = 1, \ldots, \alpha_1$ (cf. 15.1.2.). This circumstance makes it desirable to push the "induction" further, to its conclusion: i.e., to form in the same way $\Gamma_{\bar{\sigma}_1,\bar{\sigma}_2}$, $\Gamma_{\bar{\sigma}_1,\bar{\sigma}_2,\bar{\sigma}_3}$, $\ldots$, $\Gamma_{\bar{\sigma}_1,\bar{\sigma}_2,\ldots,\bar{\sigma}_\nu}$.¹ The point is that the number of moves in these games decreases successively from $\nu$ (for $\Gamma$), $\nu - 1$ (for $\Gamma_{\bar{\sigma}_1}$), over $\nu - 2$, $\nu - 3$, etc., to $0$ (for $\Gamma_{\bar{\sigma}_1,\ldots,\bar{\sigma}_\nu}$); i.e. $\Gamma_{\bar{\sigma}_1,\ldots,\bar{\sigma}_\nu}$ is a "vacuous" game (like the one mentioned in footnote 2 on p. 76). There are no moves; the player $k$ gets the fixed amount $\mathfrak{F}_k(\bar{\sigma}_1, \ldots, \bar{\sigma}_\nu)$.

This is the terminology of 15.1.2., 15.1.3., i.e. of 6., 7. In that of 15.2.1., 15.2.2., i.e. of 9., 10., we would say that $\Omega$ (for $\Gamma$) is gradually restricted to a $C_1(\bar{\sigma}_1)$ of $\mathcal{A}_2$ (for $\Gamma_{\bar{\sigma}_1}$), a $C_2(\bar{\sigma}_1, \bar{\sigma}_2)$ of $\mathcal{A}_3$ (for $\Gamma_{\bar{\sigma}_1,\bar{\sigma}_2}$), a $C_3(\bar{\sigma}_1, \bar{\sigma}_2, \bar{\sigma}_3)$ of $\mathcal{A}_4$ (for $\Gamma_{\bar{\sigma}_1,\bar{\sigma}_2,\bar{\sigma}_3}$), etc., etc., and finally to a $C_\nu(\bar{\sigma}_1, \bar{\sigma}_2, \ldots, \bar{\sigma}_\nu)$ of $\mathcal{A}_{\nu+1}$ (for $\Gamma_{\bar{\sigma}_1,\ldots,\bar{\sigma}_\nu}$). And this last set has a unique element ((10:1:g) in 10.1.1.), say $\bar{\pi}$. Hence the outcome of the game $\Gamma_{\bar{\sigma}_1,\ldots,\bar{\sigma}_\nu}$ is fixed: the player $k$ gets the fixed amount $\mathfrak{F}_k(\bar{\pi})$.

Consequently, the nature of the game $\Gamma_{\bar{\sigma}_1,\ldots,\bar{\sigma}_\nu}$ is trivially clear; it is clear what this game's value is for every player. Therefore the process which leads from the $\Gamma_{\bar{\sigma}_1}$ to $\Gamma$, if established, can be used to work backwards from $\Gamma_{\bar{\sigma}_1,\ldots,\bar{\sigma}_\nu}$ to $\Gamma_{\bar{\sigma}_1,\ldots,\bar{\sigma}_{\nu-1}}$, to $\Gamma_{\bar{\sigma}_1,\ldots,\bar{\sigma}_{\nu-2}}$, etc., etc., to $\Gamma_{\bar{\sigma}_1,\bar{\sigma}_2}$, to $\Gamma_{\bar{\sigma}_1}$, and finally to $\Gamma$.

But this is feasible only if we are able to form all games of the sequence $\Gamma$, $\Gamma_{\bar{\sigma}_1}$, $\Gamma_{\bar{\sigma}_1,\bar{\sigma}_2}$, $\ldots$, $\Gamma_{\bar{\sigma}_1,\ldots,\bar{\sigma}_\nu}$, i.e. if the final condition of 15.1.3. or 15.2.2. is fulfilled for all these games. This requirement may again be formulated for any general $n$-person game $\Gamma$; so we return now to those $\Gamma$.

15.3.2. The requirement then is, in the terminology of 15.1.2., 15.1.3. (i.e. of 6., 7.), that $\mathcal{M}_1$ must be preliminary to all $\mathcal{M}_2, \mathcal{M}_3, \ldots, \mathcal{M}_\nu$; that
1 $\bar{\sigma}_1 = 1, \ldots, \alpha_1$; $\bar{\sigma}_2 = 1, \ldots, \alpha_2$ where $\alpha_2 = \alpha_2(\bar{\sigma}_1)$; $\bar{\sigma}_3 = 1, \ldots, \alpha_3$ where $\alpha_3 = \alpha_3(\bar{\sigma}_1, \bar{\sigma}_2)$; etc., etc.
$\mathcal{M}_2$ must be preliminary to all $\mathcal{M}_3, \mathcal{M}_4, \ldots, \mathcal{M}_\nu$; etc., etc.; i.e. that preliminarity must coincide with anteriority.

In the terminology of 15.2.1., 15.2.2., i.e. of 9., 10., of course the same is obtained: all $\mathcal{D}_\kappa(k)$, $\kappa \geq 2$, must be subpartitions of $\mathcal{A}_2$; all $\mathcal{D}_\kappa(k)$, $\kappa \geq 3$, must be subpartitions of $\mathcal{A}_3$, etc., etc.; i.e. all $\mathcal{D}_\kappa(k)$ must be subpartitions of $\mathcal{A}_\lambda$ if $\kappa \geq \lambda$.¹ Since $\mathcal{A}_\kappa$ is a subpartition of $\mathcal{A}_\lambda$ in any case (cf. 10.4.1.), it suffices to require that all $\mathcal{D}_\kappa(k)$ be subpartitions of $\mathcal{A}_\kappa$. However $\mathcal{A}_\kappa$ is a subpartition of $\mathcal{D}_\kappa(k)$ within $B_\kappa(k)$ ((10:1:d) in 10.1.1.); consequently our requirement is equivalent to saying that $\mathcal{D}_\kappa(k)$ is that part of $\mathcal{A}_\kappa$ which lies in $B_\kappa(k)$.² By (10:B) in 10.4.2. this means precisely that preliminarity and anteriority coincide in $\Gamma$.

By all these considerations we have established this:

(15:B) In order to be able to form the entire sequence of games

(15:1) $\quad \Gamma,\ \Gamma_{\bar{\sigma}_1},\ \Gamma_{\bar{\sigma}_1,\bar{\sigma}_2},\ \Gamma_{\bar{\sigma}_1,\bar{\sigma}_2,\bar{\sigma}_3},\ \ldots,\ \Gamma_{\bar{\sigma}_1,\ldots,\bar{\sigma}_\nu}$

of

$\nu,\ \nu - 1,\ \nu - 2,\ \ldots,\ 0$

moves respectively, it is necessary and sufficient that in the game $\Gamma$ preliminarity and anteriority should coincide, i.e. that perfect information should prevail. (Cf. 6.4.1. and the end of 14.8.)

If $\Gamma$ is a zero-sum two-person game, then this permits the elucidation of $\Gamma$ by going through the sequence (15:1) backwards, from the trivial game $\Gamma_{\bar{\sigma}_1,\ldots,\bar{\sigma}_\nu}$ to the significant game $\Gamma$, performing each step with the help of the device which leads from the $\Gamma_{\bar{\sigma}_1}$ to $\Gamma$, as will be shown in 15.6.2.
15.4. Exact Discussion of the Inductive Step
15.4.1. We now proceed to carry out the announced step from the $\Gamma_{\bar{\sigma}_1}$ to $\Gamma$, the "inductive step." $\Gamma$ need therefore fulfill only the final condition of 15.1.3. or 15.2.2., but it must be a zero-sum two-person game.

Hence we can form all $\Gamma_{\sigma_1}$, $\sigma_1 = 1, \ldots, \alpha_1$,³ and they also are zero-sum two-person games. We denote the two players' strategies in $\Gamma$ by $\Sigma_1^1, \ldots, \Sigma_1^{\beta_1}$ and $\Sigma_2^1, \ldots, \Sigma_2^{\beta_2}$; and the "mathematical expectation" of the outcome of the play for the two players, if the strategies $\Sigma_1^{\tau_1}$, $\Sigma_2^{\tau_2}$ are used, by

$$\mathcal{H}_1(\tau_1, \tau_2) \equiv \mathcal{H}(\tau_1, \tau_2), \qquad \mathcal{H}_2(\tau_1, \tau_2) \equiv -\mathcal{H}(\tau_1, \tau_2)$$

(cf. 11.2.3. and 14.1.1.). We denote the corresponding quantities in $\Gamma_{\sigma_1}$ by $\Sigma_{\sigma_1/1}^1, \ldots, \Sigma_{\sigma_1/1}^{\beta_{\sigma_1/1}}$ and $\Sigma_{\sigma_1/2}^1, \ldots, \Sigma_{\sigma_1/2}^{\beta_{\sigma_1/2}}$, and, if the strategies $\Sigma_{\sigma_1/1}^{\tau_{\sigma_1/1}}$, $\Sigma_{\sigma_1/2}^{\tau_{\sigma_1/2}}$ are used, by

$$\mathcal{H}_{\sigma_1/1}(\tau_{\sigma_1/1}, \tau_{\sigma_1/2}) \equiv \mathcal{H}_{\sigma_1}(\tau_{\sigma_1/1}, \tau_{\sigma_1/2}), \qquad \mathcal{H}_{\sigma_1/2}(\tau_{\sigma_1/1}, \tau_{\sigma_1/2}) \equiv -\mathcal{H}_{\sigma_1}(\tau_{\sigma_1/1}, \tau_{\sigma_1/2}).$$

1 We stated this above for $\lambda = 2, 3, \ldots$; for $\lambda = 1$ it is automatically true: every partition is a subpartition of $\mathcal{A}_1$, since $\mathcal{A}_1$ consists of the one set $\Omega$ ((10:1:f) in 10.1.1.).
2 For the motivation, if one is wanted, cf. the argument of footnote 3 on p. 63.
3 From now on we write $\sigma_1, \sigma_2, \ldots, \sigma_\nu$ instead of $\bar{\sigma}_1, \bar{\sigma}_2, \ldots, \bar{\sigma}_\nu$, because no misunderstandings will be possible.
We form the $v_1$, $v_2$ of 14.4.1. for $\Gamma$ and for the $\Gamma_{\sigma_1}$, denoting them in the latter case by $v_{\sigma_1/1}$, $v_{\sigma_1/2}$. So

$$v_1 = \operatorname{Max}_{\tau_1} \operatorname{Min}_{\tau_2} \mathcal{H}(\tau_1, \tau_2), \qquad v_2 = \operatorname{Min}_{\tau_2} \operatorname{Max}_{\tau_1} \mathcal{H}(\tau_1, \tau_2),$$

and

$$v_{\sigma_1/1} = \operatorname{Max}_{\tau_{\sigma_1/1}} \operatorname{Min}_{\tau_{\sigma_1/2}} \mathcal{H}_{\sigma_1}(\tau_{\sigma_1/1}, \tau_{\sigma_1/2}), \qquad v_{\sigma_1/2} = \operatorname{Min}_{\tau_{\sigma_1/2}} \operatorname{Max}_{\tau_{\sigma_1/1}} \mathcal{H}_{\sigma_1}(\tau_{\sigma_1/1}, \tau_{\sigma_1/2}).$$

Our aim is to express the $v_1$, $v_2$ in terms of the $v_{\sigma_1/1}$, $v_{\sigma_1/2}$.

The $k_1$ of 15.1.2., 15.2.1., which determines the character of the move $\mathcal{M}_1$, will play an essential role. Since $n = 2$, its possible values are $k_1 = 0, 1, 2$. We must consider these three alternatives separately.

15.4.2. Consider first the case $k_1 = 0$; i.e. let $\mathcal{M}_1$ be a chance move. The probabilities of its alternatives $\sigma_1 = 1, \ldots, \alpha_1$ are the $p_1(1), \ldots, p_1(\alpha_1)$ mentioned in 15.1.2. ($p_1(\sigma_1)$ is the $p_1(C_1)$ of (10:A:h) in 10.1.1. with $C_1 = C_1(\sigma_1)$ in 15.2.1.).

Now a strategy of player 1 in $\Gamma$, $\Sigma_1^{\tau_1}$, consists obviously in specifying a strategy of player 1 in $\Gamma_{\sigma_1}$, $\Sigma_{\sigma_1/1}^{\tau_{\sigma_1/1}}$, for every value of the chance variable $\sigma_1 = 1, \ldots, \alpha_1$;¹ i.e. the $\Sigma_1^{\tau_1}$ correspond to the aggregates $\Sigma_{1/1}^{\tau_{1/1}}, \ldots, \Sigma_{\alpha_1/1}^{\tau_{\alpha_1/1}}$ for all possible combinations $\tau_{1/1}, \ldots, \tau_{\alpha_1/1}$.

Similarly a strategy of player 2 in $\Gamma$, $\Sigma_2^{\tau_2}$, consists in specifying a strategy of player 2 in $\Gamma_{\sigma_1}$, $\Sigma_{\sigma_1/2}^{\tau_{\sigma_1/2}}$, for every value of the chance variable $\sigma_1 = 1, \ldots, \alpha_1$; i.e. the $\Sigma_2^{\tau_2}$ correspond to the aggregates $\Sigma_{1/2}^{\tau_{1/2}}, \ldots, \Sigma_{\alpha_1/2}^{\tau_{\alpha_1/2}}$ for all possible combinations $\tau_{1/2}, \ldots, \tau_{\alpha_1/2}$.

Now the "mathematical expectations" of the outcomes in $\Gamma$ and in the $\Gamma_{\sigma_1}$ are connected by the obvious formula

$$\mathcal{H}(\tau_1, \tau_2) = \sum_{\sigma_1=1}^{\alpha_1} p_1(\sigma_1)\, \mathcal{H}_{\sigma_1}(\tau_{\sigma_1/1}, \tau_{\sigma_1/2}).$$

Therefore our formula for $v_1$ gives

$$v_1 = \operatorname{Max}_{\tau_1} \operatorname{Min}_{\tau_2} \mathcal{H}(\tau_1, \tau_2) = \operatorname{Max}_{\tau_{1/1},\ldots,\tau_{\alpha_1/1}} \operatorname{Min}_{\tau_{1/2},\ldots,\tau_{\alpha_1/2}} \sum_{\sigma_1=1}^{\alpha_1} p_1(\sigma_1)\, \mathcal{H}_{\sigma_1}(\tau_{\sigma_1/1}, \tau_{\sigma_1/2}).$$

The $\sigma_1$-term of the sum on the extreme right-hand side contains only the two variables $\tau_{\sigma_1/1}$, $\tau_{\sigma_1/2}$. Thus the variable pairs

$$\tau_{1/1}, \tau_{1/2};\ \ldots;\ \tau_{\alpha_1/1}, \tau_{\alpha_1/2}$$

occur separately, in the separate $\sigma_1$-terms. Hence in forming the $\operatorname{Min}_{\tau_{1/2},\ldots,\tau_{\alpha_1/2}}$ we can minimize each $\sigma_1$-term separately, and in forming the $\operatorname{Max}_{\tau_{1/1},\ldots,\tau_{\alpha_1/1}}$ we can again maximize each $\sigma_1$-term separately. Accordingly, our expression becomes

$$\sum_{\sigma_1=1}^{\alpha_1} p_1(\sigma_1) \operatorname{Max}_{\tau_{\sigma_1/1}} \operatorname{Min}_{\tau_{\sigma_1/2}} \mathcal{H}_{\sigma_1}(\tau_{\sigma_1/1}, \tau_{\sigma_1/2}) = \sum_{\sigma_1=1}^{\alpha_1} p_1(\sigma_1)\, v_{\sigma_1/1}.$$

Thus we have shown

(15:2) $\quad v_1 = \sum_{\sigma_1=1}^{\alpha_1} p_1(\sigma_1)\, v_{\sigma_1/1}$.

If the positions of Max and Min are interchanged, then literally the same argument yields

(15:3) $\quad v_2 = \sum_{\sigma_1=1}^{\alpha_1} p_1(\sigma_1)\, v_{\sigma_1/2}$.

1 This is clear intuitively. The reader may verify it from the formalistic point of view by applying the definitions of 11.1.1. and (11:A) in 11.1.3. to the situation described in 15.2.1.
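Formula (15:2) can be checked numerically: build the normalized form of a game whose first move is a chance move between subgames, and compare $v_1$ with the probability-weighted average of the subgame values. The sketch below is illustrative only; the matrices and names are invented, not from the text:

```python
from itertools import product

def v1(H):
    # Max over tau_1 of Min over tau_2 for a payoff matrix H
    return max(min(row) for row in H)

def chance_first_move(p, subgames):
    """Normalized form of a game whose first move is a chance move.

    A strategy picks one strategy in each subgame; the payoff is the
    p-weighted average of the subgame payoffs (the 'obvious formula')."""
    S1 = list(product(*[range(len(H)) for H in subgames]))
    S2 = list(product(*[range(len(H[0])) for H in subgames]))
    return [[sum(p[s] * subgames[s][a[s]][b[s]] for s in range(len(subgames)))
             for b in S2] for a in S1]

G1, G2 = [[3, 1], [2, 0]], [[0, 2], [4, 1]]
H = chance_first_move([0.5, 0.5], [G1, G2])
print(v1(H), 0.5 * v1(G1) + 0.5 * v1(G2))   # both equal 1.0
```

Because each $\sigma_1$-term carries its own strategy variables, Min and Max distribute over the sum, exactly as argued above.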
15.4.3. The case $k_1 = 1$ comes next, and in this case we shall have to make use of the result of 13.5.3. Considering the highly formal character of this result, it seems desirable to bring it somewhat nearer to the reader's imagination by showing that it is the formal statement of an intuitively plausible fact concerning games. This will also make it clearer why this result must play a role at this particular juncture.

The interpretation which we are now going to give to the result of 13.5.3. is based on our considerations of 14.2.-14.5., particularly those of 14.5.1., 14.5.2., and for this reason we could not propose it in 13.5.3.

For this purpose we shall consider a zero-sum two-person game $\Gamma$ in its normalized form (cf. 14.1.1.) and also its minorant and majorant games $\Gamma_1$, $\Gamma_2$ (cf. 14.2.).

If we decided to treat the normalized form of $\Gamma$ as if it were an extensive form, and introduced strategies etc. with the aim of reaching a (new) normalized form by the procedure of 11.2.2., 11.2.3., then nothing would happen, as described in 11.3. and particularly in footnote 1 on p. 84. The situation is different, however, for the majorant and minorant games $\Gamma_1$, $\Gamma_2$; these are not given in the normalized form, as mentioned in footnotes 2 and 3 on p. 100. Consequently it is appropriate and necessary to bring them into their normalized forms, which we are yet to find, by the procedure of 11.2.2., 11.2.3.

Since complete solutions of $\Gamma_1$, $\Gamma_2$ were found in 14.3.1., 14.3.3., it is to be expected that they will turn out to be strictly determined.¹

It suffices to consider $\Gamma_1$ (cf. the beginning of 14.3.4.), and this we now proceed to do.

We use the notations $\tau_1$, $\tau_2$, $\mathcal{H}(\tau_1, \tau_2)$ and $v_1$, $v_2$ for $\Gamma$, and we denote the corresponding concepts for $\Gamma_1$ by $\tau_1'$, $\tau_2'$, $\mathcal{H}'(\tau_1', \tau_2')$ and $v_1'$, $v_2'$.

A strategy of player 1 in $\Gamma_1$ consists in specifying a (fixed) value of $\tau_1 (= 1, \ldots, \beta_1)$, while a strategy of player 2 in $\Gamma_1$ consists in specifying a value of $\tau_2 (= 1, \ldots, \beta_2)$ for every value of $\tau_1 (= 1, \ldots, \beta_1)$.² So it is a function of $\tau_1$: $\tau_2 = \mathcal{J}_2(\tau_1)$.

Thus $\tau_1'$ is $\tau_1$, while $\tau_2'$ corresponds to the functions $\mathcal{J}_2$, and $\mathcal{H}'(\tau_1', \tau_2')$ to $\mathcal{H}(\tau_1, \mathcal{J}_2(\tau_1))$. Accordingly

$$v_1' = \operatorname{Max}_{\tau_1} \operatorname{Min}_{\mathcal{J}_2} \mathcal{H}(\tau_1, \mathcal{J}_2(\tau_1)),$$
$$v_2' = \operatorname{Min}_{\mathcal{J}_2} \operatorname{Max}_{\tau_1} \mathcal{H}(\tau_1, \mathcal{J}_2(\tau_1)).$$

Hence the assertion that $\Gamma_1$ is strictly determined, i.e. the validity of $v_1' = v_2'$, coincides precisely with (13:E) in 13.5.3.; there we need only replace the $x$, $u$, $f(x)$, $\psi(x, f(x))$ by our $\tau_1$, $\tau_2$, $\mathcal{J}_2(\tau_1)$, $\mathcal{H}(\tau_1, \mathcal{J}_2(\tau_1))$.

This equivalence of the result of 13.5.3. to the strictly determined character of $\Gamma_1$ makes intelligible why 13.5.3. will play an essential role in the discussion which follows. $\Gamma_1$ is a very simple example of a game in which perfect information prevails, and these are the games which are the ultimate goal of our present discussions (cf. the end of 15.3.2.). And the first move in $\Gamma_1$ is precisely of the kind which is coming up for discussion now: it is a personal move of player 1, i.e. $k_1 = 1$.
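The strict determinateness of $\Gamma_1$ asserted here can be made concrete by brute force: enumerate player 2's strategies in $\Gamma_1$ as functions $\tau_1 \mapsto \tau_2$ and check $v_1' = v_2'$. A small sketch (the matrix is invented for illustration; this is not from the text):

```python
from itertools import product

def v1_prime(H):
    # Player 1 commits tau_1 first; player 2's best reply is the row minimum.
    return max(min(row) for row in H)

def v2_prime(H):
    # Player 2's strategies in Gamma_1 are functions J: tau_1 -> tau_2,
    # enumerated here as tuples J[t1].
    rows, cols = len(H), len(H[0])
    return min(max(H[t1][J[t1]] for t1 in range(rows))
               for J in product(range(cols), repeat=rows))

H = [[1, -1], [-1, 1]]            # Matching Pennies
print(v1_prime(H), v2_prime(H))   # -1 -1 : Gamma_1 is strictly determined
```

Even though the underlying game is not strictly determined, its minorant game is: player 2's informational advantage collapses the gap between Max Min and Min Max, which is just (13:E).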
15.5. Exact Discussion of the Inductive Step (Continuation)

15.5.1. Consider now the case $k_1 = 1$; i.e. let $\mathcal{M}_1$ be a personal move of player 1.

A strategy of player 1 in $\Gamma$, $\Sigma_1^{\tau_1}$, consists obviously in specifying a (fixed) value $\sigma_1^0 (= 1, \ldots, \alpha_1)$ and a (fixed) strategy of player 1 in $\Gamma_{\sigma_1^0}$, $\Sigma_{\sigma_1^0/1}^{\tau_{\sigma_1^0/1}}$;³ i.e. the $\Sigma_1^{\tau_1}$ correspond to the pairs $\sigma_1^0$, $\tau_{\sigma_1^0/1}$.

1 This is merely a heuristic argument, since the principles on which the "solutions" of 14.3.1., 14.3.3. are based are not entirely the same as those by which we disposed of the strictly determined case in 14.5.1., 14.5.2., although the former principles were a stepping stone to the latter. It is true that the argument could be made pretty convincing by an "unmathematical," purely verbal, amplification. We prefer to settle the matter mathematically, the reasons being the same as given in a similar situation in 14.3.2.
2 This is clear intuitively. The reader may verify it from the formalistic point of view by reformulating the definition of $\Gamma_1$ as given in 14.2. in the partition and set terminology, and then applying the definitions of 11.1.1. and (11:A) in 11.1.3. The essential fact is, at any rate, that in $\Gamma_1$ the personal move of player 1 is preliminary to that of player 2.
3 Cf. footnote 1 on p. 118 or footnote 2 above.
A strategy of player 2 in $\Gamma$, $\Sigma_2^{\tau_2}$, on the other hand, consists in specifying a strategy of player 2 in $\Gamma_{\sigma_1^0}$, $\Sigma_{\sigma_1^0/2}^{\tau_{\sigma_1^0/2}}$, for every value of the variable $\sigma_1^0 = 1, \ldots, \alpha_1$.¹ So $\tau_{\sigma_1^0/2}$ is a function of $\sigma_1^0$: $\tau_{\sigma_1^0/2} = \mathcal{J}_2(\sigma_1^0)$; i.e. the $\Sigma_2^{\tau_2}$ correspond to the functions $\mathcal{J}_2$, and clearly

$$\mathcal{H}(\tau_1, \tau_2) = \mathcal{H}_{\sigma_1^0}\bigl(\tau_{\sigma_1^0/1}, \mathcal{J}_2(\sigma_1^0)\bigr).$$

Therefore our formula for $v_1$ gives:

$$v_1 = \operatorname{Max}_{\sigma_1^0,\,\tau_{\sigma_1^0/1}} \operatorname{Min}_{\mathcal{J}_2} \mathcal{H}_{\sigma_1^0}\bigl(\tau_{\sigma_1^0/1}, \mathcal{J}_2(\sigma_1^0)\bigr) = \operatorname{Max}_{\tau_{\sigma_1^0/1}} \operatorname{Max}_{\sigma_1^0} \operatorname{Min}_{\mathcal{J}_2} \mathcal{H}_{\sigma_1^0}\bigl(\tau_{\sigma_1^0/1}, \mathcal{J}_2(\sigma_1^0)\bigr).$$

Now

$$\operatorname{Max}_{\sigma_1^0} \operatorname{Min}_{\mathcal{J}_2} \mathcal{H}_{\sigma_1^0}\bigl(\tau_{\sigma_1^0/1}, \mathcal{J}_2(\sigma_1^0)\bigr) = \operatorname{Max}_{\sigma_1^0} \operatorname{Min}_{\tau_{\sigma_1^0/2}} \mathcal{H}_{\sigma_1^0}(\tau_{\sigma_1^0/1}, \tau_{\sigma_1^0/2})$$

owing to (13:G) in 13.5.3.; there we need only replace the $x$, $u$, $f(x)$, $\psi(x, u)$ by our $\sigma_1^0$, $\tau_{\sigma_1^0/2}$, $\mathcal{J}_2(\sigma_1^0)$, $\mathcal{H}_{\sigma_1^0}(\tau_{\sigma_1^0/1}, \tau_{\sigma_1^0/2})$.² Consequently

$$v_1 = \operatorname{Max}_{\tau_{\sigma_1^0/1}} \operatorname{Max}_{\sigma_1^0} \operatorname{Min}_{\tau_{\sigma_1^0/2}} \mathcal{H}_{\sigma_1^0}(\tau_{\sigma_1^0/1}, \tau_{\sigma_1^0/2}) = \operatorname{Max}_{\sigma_1^0} \operatorname{Max}_{\tau_{\sigma_1^0/1}} \operatorname{Min}_{\tau_{\sigma_1^0/2}} \mathcal{H}_{\sigma_1^0}(\tau_{\sigma_1^0/1}, \tau_{\sigma_1^0/2}) = \operatorname{Max}_{\sigma_1^0} v_{\sigma_1^0/1}.$$

And our formula for $v_2$ gives:³

$$v_2 = \operatorname{Min}_{\mathcal{J}_2} \operatorname{Max}_{\sigma_1^0,\,\tau_{\sigma_1^0/1}} \mathcal{H}_{\sigma_1^0}\bigl(\tau_{\sigma_1^0/1}, \mathcal{J}_2(\sigma_1^0)\bigr) = \operatorname{Min}_{\mathcal{J}_2} \operatorname{Max}_{\sigma_1^0} \operatorname{Max}_{\tau_{\sigma_1^0/1}} \mathcal{H}_{\sigma_1^0}\bigl(\tau_{\sigma_1^0/1}, \mathcal{J}_2(\sigma_1^0)\bigr).$$

Now

$$\operatorname{Min}_{\mathcal{J}_2} \operatorname{Max}_{\sigma_1^0} \operatorname{Max}_{\tau_{\sigma_1^0/1}} \mathcal{H}_{\sigma_1^0}\bigl(\tau_{\sigma_1^0/1}, \mathcal{J}_2(\sigma_1^0)\bigr) = \operatorname{Max}_{\sigma_1^0} \operatorname{Min}_{\mathcal{J}_2} \operatorname{Max}_{\tau_{\sigma_1^0/1}} \mathcal{H}_{\sigma_1^0}\bigl(\tau_{\sigma_1^0/1}, \mathcal{J}_2(\sigma_1^0)\bigr) = \operatorname{Max}_{\sigma_1^0} \operatorname{Min}_{\tau_{\sigma_1^0/2}} \operatorname{Max}_{\tau_{\sigma_1^0/1}} \mathcal{H}_{\sigma_1^0}(\tau_{\sigma_1^0/1}, \tau_{\sigma_1^0/2})$$

owing to (13:E) and (13:G) in 13.5.3.; there we need only replace the $x$, $u$, $f(x)$, $\psi(x, u)$ by our $\sigma_1^0$, $\tau_{\sigma_1^0/2}$, $\mathcal{J}_2(\sigma_1^0)$, $\operatorname{Max}_{\tau_{\sigma_1^0/1}} \mathcal{H}_{\sigma_1^0}(\tau_{\sigma_1^0/1}, \tau_{\sigma_1^0/2})$.⁴ Consequently

$$v_2 = \operatorname{Max}_{\sigma_1^0} \operatorname{Min}_{\tau_{\sigma_1^0/2}} \operatorname{Max}_{\tau_{\sigma_1^0/1}} \mathcal{H}_{\sigma_1^0}(\tau_{\sigma_1^0/1}, \tau_{\sigma_1^0/2}) = \operatorname{Max}_{\sigma_1^0} v_{\sigma_1^0/2}.$$

Summing up (and writing $\sigma_1$ instead of $\sigma_1^0$):

1 Cf. footnote 1 on p. 118 or footnote 2 on p. 120.
2 $\tau_{\sigma_1^0/1}$ must be treated in this case as a constant. This step is of course a rather trivial one; cf. the argument loc. cit.
3 In contrast to 15.4.2., there is now an essential difference between the treatments of $v_1$ and $v_2$.
4 $\tau_{\sigma_1^0/1}$ is killed in this case by the operation $\operatorname{Max}_{\tau_{\sigma_1^0/1}}$. This step is not trivial: it makes use of (13:E) in 13.5.3., i.e. of the essential result of that paragraph, as stated in 15.4.3.
(15:4) $\quad v_1 = \operatorname{Max}_{\sigma_1} v_{\sigma_1/1}$,
(15:5) $\quad v_2 = \operatorname{Max}_{\sigma_1} v_{\sigma_1/2}$.
15.5.2. Consider finally the case $k_1 = 2$; i.e. let $\mathcal{M}_1$ be a personal move of player 2.

Interchanging players 1 and 2 carries this into the preceding case ($k_1 = 1$). As discussed in 14.6., this interchange replaces $v_1$, $v_2$ by $-v_2$, $-v_1$, and hence equally $v_{\sigma_1/1}$, $v_{\sigma_1/2}$ by $-v_{\sigma_1/2}$, $-v_{\sigma_1/1}$. Substituting these changes into the above formulae (15:4), (15:5), it becomes clear that these formulae must be modified only by replacing Max in them by Min. So we have:

(15:6) $\quad v_1 = \operatorname{Min}_{\sigma_1} v_{\sigma_1/1}$,
(15:7) $\quad v_2 = \operatorname{Min}_{\sigma_1} v_{\sigma_1/2}$.
15.5.3. We may sum up the formulae (15:2)-(15:7) of 15.4.2., 15.5.1., 15.5.2., as follows:

For all functions $f(\sigma_1)$ of the variable $\sigma_1 (= 1, \ldots, \alpha_1)$ define three operations $M_{k_1}^{\sigma_1}$, $k_1 = 0, 1, 2$, as follows:

(15:8) $\quad M_{k_1}^{\sigma_1} f(\sigma_1) = \begin{cases} \sum_{\sigma_1=1}^{\alpha_1} p_1(\sigma_1) f(\sigma_1) & \text{for } k_1 = 0, \\ \operatorname{Max}_{\sigma_1} f(\sigma_1) & \text{for } k_1 = 1, \\ \operatorname{Min}_{\sigma_1} f(\sigma_1) & \text{for } k_1 = 2. \end{cases}$

Then

$$v_k = M_{k_1}^{\sigma_1} v_{\sigma_1/k} \qquad \text{for } k = 1, 2.$$

We wish to emphasize some simple facts concerning these operations $M_{k_1}^{\sigma_1}$.

First, $M_{k_1}^{\sigma_1}$ kills the variable $\sigma_1$; i.e. $M_{k_1}^{\sigma_1} f(\sigma_1)$ no longer depends on $\sigma_1$. For $k_1 = 1, 2$, i.e. for $\operatorname{Max}_{\sigma_1}$, $\operatorname{Min}_{\sigma_1}$, this was pointed out in 13.2.3. For $k_1 = 0$ it is obvious; and this operation is, by the way, analogous to the integral used as an illustration in footnote 2 on p. 91.

Second, $M_{k_1}^{\sigma_1}$ depends explicitly on the game $\Gamma$. This is evident since $k_1$ occurs in it and $\sigma_1$ has the range $1, \ldots, \alpha_1$. But a further dependence is due to the use of the $p_1(1), \ldots, p_1(\alpha_1)$ in the case of $k_1 = 0$.

Third, the dependence of $v_k$ on the $v_{\sigma_1/k}$ is the same for $k = 1, 2$ for each value of $k_1$.
We conclude by observing that it would have been easy to make these formulae (involving the average $\sum_{\sigma_1} p_1(\sigma_1) f(\sigma_1)$ for a chance move, the maximum for a personal move of the first player, and the minimum for one of his opponent) plausible by a purely verbal (unmathematical) argument. It seemed nevertheless necessary to give an exact mathematical treatment in order to do full justice to the precise positions of $v_1$ and of $v_2$. A purely verbal argument attempting this would unavoidably become so involved, if not obscure, as to be of little value.
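In modern terms the three operations of (15:8) are a single reduction step; a minimal sketch (the function names are my own, for illustration only):

```python
def M(k1, values, probs=None):
    """The operation M of (15:8): expectation over the alternatives for a
    chance move (k1 = 0), Max for a personal move of player 1 (k1 = 1),
    Min for one of player 2 (k1 = 2). 'values' lists f(sigma_1)."""
    if k1 == 0:
        return sum(p * v for p, v in zip(probs, values))
    return max(values) if k1 == 1 else min(values)

print(M(0, [1, -1], probs=[0.25, 0.75]))  # -0.5
print(M(1, [1, -1]), M(2, [1, -1]))       # 1 -1
```

Note that, as the text emphasizes, each call "kills" the variable $\sigma_1$: the result no longer depends on it.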
15.6. The Result in the Case of Perfect Information

15.6.1. We return now to the situation described at the end of 15.3.2. and make all the hypotheses mentioned there; i.e. we assume that perfect information prevails in the game $\Gamma$ and also that it is a zero-sum two-person game. The scheme indicated loc. cit., together with the formula (15:8) of 15.5.3. which takes care of the "inductive" step, enables us to determine the essential properties of $\Gamma$.

We prove first, without going any further into details, that such a $\Gamma$ is always strictly determined. We do this by "complete induction" with respect to the length $\nu$ of the game (cf. 15.1.2.). This consists of proving two things:

(15:C:a) That this is true for all games of minimum length, i.e. for $\nu = 0$.

(15:C:b) That if it is true for all games of length $\nu - 1$, for a given $\nu = 1, 2, \ldots$, then it is also true for all games of length $\nu$.

Proof of (15:C:a): If the length $\nu$ is zero, then the game has no moves at all; it consists of paying fixed amounts to the players 1, 2, say the amounts $w$, $-w$.¹ Hence $\beta_1 = \beta_2 = 1$, so $\tau_1 = \tau_2 = 1$, $\mathcal{H}(\tau_1, \tau_2) \equiv w$,² and so

$$v_1 = v_2 = w;$$

i.e. $\Gamma$ is strictly determined, and its $v = w$.³

Proof of (15:C:b): Let $\Gamma$ be of length $\nu$. Then every $\Gamma_{\sigma_1}$ is of length $\nu - 1$; hence by assumption every $\Gamma_{\sigma_1}$ is strictly determined. Therefore $v_{\sigma_1/1} \equiv v_{\sigma_1/2}$. Now the formula (15:8) of 15.5.3. shows⁴ that $v_1 = v_2$. Hence $\Gamma$ is also strictly determined, and the proof is completed.
15.6.2. We shall now go more into detail and determine the $v_1 = v_2 = v$ of $\Gamma$ explicitly. For this we do not even need the above result of 15.6.1.

We form, as at the end of 15.3.2., the sequence of games

(15:9) $\quad \Gamma,\ \Gamma_{\sigma_1},\ \Gamma_{\sigma_1,\sigma_2},\ \Gamma_{\sigma_1,\sigma_2,\sigma_3},\ \ldots,\ \Gamma_{\sigma_1,\ldots,\sigma_\nu}$⁵

of the respective lengths

$\nu,\ \nu - 1,\ \nu - 2,\ \ldots,\ 0$.

Denote the $v_1$, $v_2$ of these games by

$$v_1, v_2;\quad v_{\sigma_1/1}, v_{\sigma_1/2};\quad v_{\sigma_1,\sigma_2/1}, v_{\sigma_1,\sigma_2/2};\quad \ldots;\quad v_{\sigma_1,\ldots,\sigma_\nu/1}, v_{\sigma_1,\ldots,\sigma_\nu/2}.$$

1 Cf. the game in footnote 2 on p. 76, or $\Gamma_{\sigma_1,\ldots,\sigma_\nu}$ in 15.3.1. In the partition and set terminology: for $\nu = 0$, (10:1:f), (10:1:g) in 10.1.1. show that $\Omega$ has only one element, say $\bar{\pi}$: $\Omega = (\bar{\pi})$. So $w = \mathfrak{F}_1(\bar{\pi})$, $-w = \mathfrak{F}_2(\bar{\pi})$ play the role indicated above.
2 I.e. each player has only one strategy, which consists of doing nothing.
3 This is, of course, rather obvious. The essential step is (15:C:b).
4 I.e. the fact mentioned at the end of 15.5.3., that the formula is the same for $k = 1, 2$ for each value of $k_1$.
5 Cf. footnote 3 on p. 117.

Let us apply (15:8) of 15.5.3. to the "inductive" step described at the end of 15.3.2.; i.e. let us replace the $\sigma_1$, $\Gamma$, $\Gamma_{\sigma_1}$ of 15.5.3. by $\sigma_\kappa$, $\Gamma_{\sigma_1,\ldots,\sigma_{\kappa-1}}$, $\Gamma_{\sigma_1,\ldots,\sigma_{\kappa-1},\sigma_\kappa}$ for each $\kappa = 1, \ldots, \nu$. The $k_1$ of 15.5.3. then refers to the first move of $\Gamma_{\sigma_1,\ldots,\sigma_{\kappa-1}}$, i.e. to the move $\mathcal{M}_\kappa$ in $\Gamma$. It is therefore convenient to denote it by $k_\kappa(\sigma_1, \ldots, \sigma_{\kappa-1})$. (Cf. 7.2.1.) Accordingly we form the operation $M_{k_\kappa(\sigma_1,\ldots,\sigma_{\kappa-1})}^{\sigma_\kappa}$, replacing the $M_{k_1}^{\sigma_1}$ of 15.5.3. In this way we obtain

(15:10) $\quad v_{\sigma_1,\ldots,\sigma_{\kappa-1}/k} = M_{k_\kappa(\sigma_1,\ldots,\sigma_{\kappa-1})}^{\sigma_\kappa}\, v_{\sigma_1,\ldots,\sigma_{\kappa-1},\sigma_\kappa/k}$ for $k = 1, 2$.

Consider now the last element of the sequence (15:9), the game $\Gamma_{\sigma_1,\ldots,\sigma_\nu}$. This falls under the discussion of (15:C:a) in 15.6.1.; it has no moves at all. Denote its unique play¹ by $\bar{\pi} = \bar{\pi}(\sigma_1, \ldots, \sigma_\nu)$. Hence its fixed $w$² is equal to $\mathfrak{F}_1(\bar{\pi}(\sigma_1, \ldots, \sigma_\nu))$. So we have:

(15:11) $\quad v_{\sigma_1,\ldots,\sigma_\nu/1} = v_{\sigma_1,\ldots,\sigma_\nu/2} = \mathfrak{F}_1(\bar{\pi}(\sigma_1, \ldots, \sigma_\nu))$.

Now apply (15:10) with $\kappa = \nu$ to (15:11), and then to the result with $\kappa = \nu - 1, \ldots, 2, 1$ successively. In this manner

(15:12) $\quad v_1 = v_2 = v = M_{k_1}^{\sigma_1} M_{k_2(\sigma_1)}^{\sigma_2} \cdots M_{k_\nu(\sigma_1,\ldots,\sigma_{\nu-1})}^{\sigma_\nu}\, \mathfrak{F}_1(\bar{\pi}(\sigma_1, \ldots, \sigma_\nu))$

obtains.

This proves once more that $\Gamma$ is strictly determined, and also gives an explicit formula for its value.
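Formula (15:12) is exactly what is now called backward induction on the game tree: each move applies the operation $M$ appropriate to its character, innermost first. A self-contained sketch (the tree encoding and example are invented for illustration; they are not from the text):

```python
# Node encoding: ('end', payoff to player 1) for a finished play,
# ('chance', probs, kids) for a chance move, or (1, kids) / (2, kids)
# for a personal move of player 1 / player 2.
def value(node):
    tag = node[0]
    if tag == 'end':
        return node[1]
    if tag == 'chance':                 # k = 0: probability-weighted average
        _, probs, kids = node
        return sum(p * value(k) for p, k in zip(probs, kids))
    vals = [value(k) for k in node[1]]
    return max(vals) if tag == 1 else min(vals)   # k = 1: Max, k = 2: Min

game = (1, [(2, [('end', 0), ('end', -1)]),
            ('chance', [0.5, 0.5], [('end', 1), ('end', 0)])])
print(value(game))  # 0.5: player 1 prefers the chance branch to a sure -1
```

Each recursive call realizes one application of $M_{k_\kappa(\sigma_1,\ldots,\sigma_{\kappa-1})}^{\sigma_\kappa}$ in (15:12); the recursion bottoms out at the fixed outcomes $\mathfrak{F}_1(\bar{\pi}(\sigma_1, \ldots, \sigma_\nu))$.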
15.7. Application to Chess

15.7.1. The allusions of 6.4.1. and the assertions of 14.8. concerning those zero-sum two-person games in which preliminarity and anteriority coincide, i.e. where perfect information prevails, are now established. We referred there to the general opinion that these games are of a particularly rational character, and we have now given this vague view a precise meaning by showing that the games in question are strictly determined. And we have also shown a fact much less founded on any "general opinion": that this is also true when the game contains chance moves.

Examples of games with perfect information were already given in 6.4.1.: Chess (without chance moves) and Backgammon (with chance moves). Thus we have established for all these games the existence of a definite value (of a play) and of definite best strategies. But we have established their existence only in the abstract, while our method for their construction is in most cases too lengthy for effective use.³

In this connection it is worthwhile to consider Chess in a little more detail.

1 Cf. the remarks concerning $\Gamma_{\sigma_1,\ldots,\sigma_\nu}$ in 15.3.1.
2 Cf. (15:C:a) in 15.6.1., particularly footnote 1 on p. 123.
3 This is due mainly to the enormous value of $\nu$. For Chess, cf. the pertinent part of footnote 3 on p. 59. (The $v^*$ there is our $\kappa$; cf. the end of 7.2.3.)
The outcome of a play in Chess, i.e. every value of the functions ℱ_k of
6.2.2. or 9.2.4., is restricted to the numbers 1, 0, −1.¹ Thus the functions
𝒢_k of 11.2.2. have the same values, and since there are no chance moves in
Chess, the same is true for the function ℋ_k of 11.2.3.² In what follows we
shall use the function ℋ = ℋ_1 of 14.1.1.
Since ℋ has only the values 1, 0, −1, the number

(15:13)  v = Max_{τ_1} Min_{τ_2} ℋ(τ_1, τ_2) = Min_{τ_2} Max_{τ_1} ℋ(τ_1, τ_2)

has necessarily one of these values:

v = 1, 0, −1.
We leave to the reader the simple discussion that (15:13) means this:
(15:D:a) If v = 1 then player 1 ("white") possesses a strategy with
which he "wins," irrespective of what player 2 ("black")
does.
(15:D:b) If v = 0 then both players possess a strategy with which
each one can "tie" (and possibly "win"), irrespective of
what the other player does.
(15:D:c) If v = −1 then player 2 ("black") possesses a strategy
with which he "wins," irrespective of what player 1 ("white")
does.³
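The trichotomy (15:D:a)-(15:D:c) can be checked mechanically on any strictly determined matrix whose entries are restricted to 1, 0, −1. The following sketch (Python; the small matrix is invented for illustration, since Chess's actual matrix is far too large to write down) evaluates both sides of (15:13):

```python
# Illustrative sketch of (15:13) on a small payoff matrix H(t1, t2) whose
# entries are restricted to 1, 0, -1, as in Chess.  The matrix below is an
# invented example with a saddle point, so Max Min = Min Max.

H = [
    [ 0,  1,  1],   # rows:    strategies t1 of player 1 ("white")
    [-1,  0,  1],   # columns: strategies t2 of player 2 ("black")
    [-1, -1,  0],
]

max_min = max(min(row) for row in H)                             # Max_t1 Min_t2
min_max = min(max(H[i][j] for i in range(3)) for j in range(3))  # Min_t2 Max_t1

assert max_min == min_max            # strictly determined
v = max_min
assert v in (-1, 0, 1)               # the trichotomy of (15:D)
# Here v = 0: case (15:D:b), each player can force at least a "tie".
```

With a different illustrative matrix the same two lines would report case (15:D:a) or (15:D:c) instead.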
15.7.2. This shows that if the theory of Chess were really fully known
there would be nothing left to play. The theory would show which of the
three possibilities (15:D:a), (15:D:b), (15:D:c) actually holds, and accordingly
the play would be decided before it starts: The decision would be in
case (15:D:a) for "white," in case (15:D:b) for a "tie," in case (15:D:c)
for "black."
But our proof, which guarantees the validity of one (and only one)
of these three alternatives, gives no practically usable method to determine
the true one. This relative, human difficulty necessitates the use of those
incomplete, heuristic methods of playing, which constitute "good" Chess;
and without it there would be no element of "struggle" and "surprise" in
that game.
¹ This is the simplest way to interpret a "win," "tie," or "loss" of a play by the
player k.
² Every value of 𝒢_k is one of the values of ℱ_k; every value of ℋ_k, in the absence of chance moves, is
one of the values of 𝒢_k; cf. loc. cit. If there were chance moves, then the value of ℋ_k would be the
probability of a "win" minus that of a "loss," i.e. a number which may lie anywhere
between 1 and −1.
³ When there are chance moves, then ℋ(τ_1, τ_2) is the excess probability of a "win"
over a "loss," cf. footnote 2 above. The players try to maximize or to minimize this
number, and the sharp trichotomy of (15:D:a)-(15:D:c) above does not, in general,
obtain.
Although Backgammon is a game in which complete information prevails, and which
contains chance moves, it is not a good example for the above possibility: Backgammon
is played for varying payments, and not for simple "win," "tie" or "loss," i.e. the
values of the ℱ_k are not restricted to the numbers 1, 0, −1.
126 ZERO-SUM TWO-PERSON GAMES: THEORY
15.8. The Alternative, Verbal Discussion
15.8.1. We conclude this chapter by an alternative, simpler, less formalistic
approach to our main result, that all zero-sum two-person games, in
which perfect information prevails, are strictly determined.
It can be questioned whether the argumentation which follows is really
a proof; i.e., we prefer to formulate it as a plausibility argument by which a
value can be ascribed to each play of any game Γ of the above type, but this
is still open to criticism. It is not necessary to show in detail how those
criticisms can be invalidated, since we obtain the same value v of a play of
Γ as in 15.4.-15.6., and there we gave an absolutely rigorous proof
using precisely defined concepts. The value of the present plausibility
argument is that it is easier to grasp and that it may be repeated for other
games, in which perfect information prevails, which are not subject to the
zero-sum two-person restriction. The point which we wish to bring out
is that the same criticisms apply in the general case too, and that they can
no longer be invalidated there. Indeed, the solution there will be found
(even in games where perfect information prevails) along entirely different
lines. This will make clearer the nature of the difference between the
zero-sum two-person case and the general case. That will be rather
important for the justification of the fundamentally different methods
which will have to be used for the treatment of the general case
(cf. 24.).
15.8.2. Consider a zero-sum two-person game Γ in which perfect information
prevails. We use the notations of 15.6.2. in all respects: for the
ℳ_1, ℳ_2, …, ℳ_ν; the σ_1, σ_2, …, σ_ν; the k_1, k_2(σ_1), …, k_ν(σ_1, σ_2, …, σ_{ν−1});
the probabilities; the operators M_{k_1}, M_{k_2}^{(σ_1)}, …, M_{k_ν}^{(σ_1, …, σ_{ν−1})}; the
sequence (15:9) of games derived from Γ; and the function ℱ_1(π(σ_1, …, σ_ν)).
We proceed to discuss the game Γ by starting with the last move ℳ_ν
and then going backward from there through the moves ℳ_{ν−1}, ℳ_{ν−2}, ….
Assume first that the choices σ_1, σ_2, …, σ_{ν−1} (of the moves ℳ_1, ℳ_2, …,
ℳ_{ν−1}) have already been made, and that the choice σ_ν (of the move ℳ_ν) is
now to be made.
If ℳ_ν is a chance move, i.e. if k_ν(σ_1, σ_2, …, σ_{ν−1}) = 0, then σ_ν will
have the values 1, 2, …, α_ν(σ_1, …, σ_{ν−1}) with the respective probabilities
p_ν(1), p_ν(2), …, p_ν(α_ν(σ_1, …, σ_{ν−1})). So the mathematical expectation
of the final payment (for the player 1) ℱ_1(π(σ_1, …, σ_{ν−1}, σ_ν)) is

Σ_{σ_ν} p_ν(σ_ν) ℱ_1(π(σ_1, …, σ_{ν−1}, σ_ν)).

If ℳ_ν is a personal move of players 1 or 2, i.e. if k_ν(σ_1, …, σ_{ν−1}) = 1
or 2, then that player can be expected to maximize or to minimize
ℱ_1(π(σ_1, …, σ_{ν−1}, σ_ν)) by his choice of σ_ν; i.e. the outcome
Max_{σ_ν} ℱ_1(π(σ_1, …, σ_{ν−1}, σ_ν)) or Min_{σ_ν} ℱ_1(π(σ_1, …, σ_{ν−1}, σ_ν)), respectively,
is to be expected.
I.e., the outcome to be expected for the play, after the choices σ_1, …,
σ_{ν−1} have been made, is at any rate

M_{k_ν}^{(σ_1, …, σ_{ν−1})} ℱ_1(π(σ_1, …, σ_ν)).

Assume next that only the choices σ_1, …, σ_{ν−2} (of the moves
ℳ_1, …, ℳ_{ν−2}) have been made, and that the choice σ_{ν−1} (of the move ℳ_{ν−1})
is now to be made.
Since a definite choice of σ_{ν−1} entails, as we have seen, the outcome
M_{k_ν}^{(σ_1, …, σ_{ν−1})} ℱ_1(π(σ_1, …, σ_ν)), which is a function of σ_1, …, σ_{ν−1}
only (since the operation M_{k_ν}^{(σ_1, …, σ_{ν−1})} kills σ_ν), we can proceed as above.
We need only replace ν; σ_1, …, σ_ν; ℱ_1(π(σ_1, …, σ_ν)) by
ν − 1; σ_1, …, σ_{ν−1}; M_{k_ν}^{(σ_1, …, σ_{ν−1})} ℱ_1(π(σ_1, …, σ_ν)).
Consequently the outcome to be expected for the play, after the choices
σ_1, …, σ_{ν−2} have been made, is

M_{k_{ν−1}}^{(σ_1, …, σ_{ν−2})} M_{k_ν}^{(σ_1, …, σ_{ν−1})} ℱ_1(π(σ_1, …, σ_ν)).

Similarly the outcome to be expected for the play, after the choices
σ_1, …, σ_{ν−3} have been made, is

M_{k_{ν−2}}^{(σ_1, …, σ_{ν−3})} M_{k_{ν−1}}^{(σ_1, …, σ_{ν−2})} M_{k_ν}^{(σ_1, …, σ_{ν−1})} ℱ_1(π(σ_1, …, σ_ν)).

Finally, the outcome to be expected for the play outright, before it
has begun, is

M_{k_1} M_{k_2}^{(σ_1)} ⋯ M_{k_ν}^{(σ_1, …, σ_{ν−1})} ℱ_1(π(σ_1, …, σ_ν)).
And this is precisely the v of (15:12) in 15.6.2. 1
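The backward recursion just described can be carried out mechanically on an explicitly given tree. A minimal sketch (Python; the tree encoding and the example tree are illustrative assumptions of mine, not constructions from the text) computes the v of (15:12):

```python
# A sketch of the backward recursion of 15.8.2 on an explicit game tree.
# Node formats (my own encoding, not the text's):
#   ("leaf", payoff)             -- the value F1 of a completed play
#   ("chance", [(p, child)...])  -- a chance move with probabilities p
#   ("move", 1 or 2, [child...]) -- a personal move of player 1 or 2

def value(node):
    kind = node[0]
    if kind == "leaf":
        return node[1]
    if kind == "chance":
        # mathematical expectation, the operator M for k = 0
        return sum(p * value(child) for p, child in node[1])
    _, player, children = node
    vals = [value(c) for c in children]
    return max(vals) if player == 1 else min(vals)   # 1 maximizes, 2 minimizes

# Player 1 moves, then player 2, with one fair chance move in a subtree:
game = ("move", 1, [
    ("move", 2, [("leaf", 1), ("leaf", -1)]),
    ("move", 2, [("chance", [(0.5, ("leaf", 1)), (0.5, ("leaf", 0))]),
                 ("leaf", 0)]),
])
print(value(game))   # the v of (15:12) for this tree
```

Note that, as in footnote 1 below, the recursion presupposes a tree of fixed depth in each branch; a variable-length game would first have to be padded with dummy moves.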
15.8.3. The objection against the procedure of 15.8.2. is that this
approach to the "value" of a play of Γ presupposes "rational" behavior
of all players; i.e. player 1's strategy is based upon the assumption that
player 2's strategy is optimal, and vice versa.
Specifically: Assume k_{ν−1}(σ_1, …, σ_{ν−2}) = 1, k_ν(σ_1, …, σ_{ν−1}) = 2.
Then player 1, whose personal move is ℳ_{ν−1}, chooses his σ_{ν−1} in the conviction
that player 2, whose personal move is ℳ_ν, chooses his σ_ν "rationally."
Indeed, this is his sole excuse for assuming that his choice of σ_{ν−1} entails
the outcome Min_{σ_ν} ℱ_1(π(σ_1, …, σ_ν)), i.e. M_{k_ν}^{(σ_1, …, σ_{ν−1})} ℱ_1(π(σ_1, …, σ_ν)),
of the play. (Cf. the discussion of ℳ_{ν−1} in 15.8.2.)
¹ In imagining the application of this procedure to any specific game it must be remembered
that we assume the length ν of Γ to be fixed. If ν is actually variable, and it is
so in most games (cf. footnote 3 on p. 58), then we must first make it constant, by
the device of adding "dummy moves" to Γ as described at the end of 7.2.3. It is only
after this has been done that the above regression through ℳ_ν, ℳ_{ν−1}, …, ℳ_1 becomes
feasible.
For practical construction this procedure is of course no better than that of 15.4.-15.6.
Possibly some very simple games, like Tit-tat-toe, could be effectively treated in
either manner.
Now in the second part of 4.1.2. we came to the conclusion that the
hypothesis of "rationality" in others must be avoided. The argumentation
of 15.8.2. did not meet this requirement.
It is possible to argue that in a zero-sum two-person game the rationality
of the opponent can be assumed, because the irrationality of his opponent
can never harm a player. Indeed, since there are only two players and
since the sum is zero, every loss which the opponent irrationally inflicts
upon himself necessarily causes an equal gain to the other player.¹ As it
stands, this argument is far from complete, but it could be elaborated considerably.
However, we do not need to be concerned with its stringency:
We have the proof of 15.4.-15.6. which is not open to these criticisms.²
But the above discussion is probably nevertheless significant for an
essential aspect of this matter. We shall see how it affects the modified
conditions in the more general case, not subject to the zero-sum two-person
restriction, referred to at the end of 15.8.1.
16. Linearity and Convexity
16.1. Geometrical Background
16.1.1. The task which confronts us next is that of finding a solution
which comprises all zero-sum two-person games, i.e. which meets the
difficulties of the non-strictly determined case. We shall succeed in doing
this with the help of the same ideas with which we mastered the strictly
determined case: It will appear that they can be extended so as to cover all
zero-sum two-person games. In order to do this we shall have to make
use of certain possibilities of probability theory (cf. 17.1., 17.2.). And
it will be necessary to use some mathematical devices which are not quite
the usual ones. Our analysis of 13. provides one part of the tools; for the
remainder it will be most convenient to fall back on the mathematico-geometrical
theory of linearity and convexity. Two theorems on convex
bodies³ will be particularly significant.
For these reasons we are now going to discuss, to the extent to which
they are needed, the concepts of linearity and convexity.
16.1.2. It is not necessary for us to analyze in a fundamental way the
notion of n-dimensional linear (Euclidean) space. All we need to say is
that this space is described by n numerical coordinates. Accordingly we
define for each n = 1, 2, …, the n-dimensional linear space L_n as the
set of all n-uplets of real numbers {x_1, …, x_n}. These n-uplets can also
be looked upon as functions of the variable i, with the domain (1, …, n)
¹ This is not necessarily true if the sum is not constantly zero, or if there are more
than two players. For details cf. 20.1., 24.2.2., 58.3.
² Cf. in this respect particularly (14:D:a), (14:D:b), (14:C:d), (14:C:e) in 14.5.1.
and (14:C:a), (14:C:b) in 14.5.2.
³ Cf. T. Bonnesen and W. Fenchel: Theorie der konvexen Körper, in Ergebnisse der
Mathematik und ihrer Grenzgebiete, Vol. III/1, Berlin 1934. Further investigations in
H. Weyl: Elementare Theorie der konvexen Polyeder, Commentarii Mathematici Helvetici,
Vol. VII, 1935, pp. 290-306.
LINEARITY AND CONVEXITY 129
in the sense of 13.1.2., 13.1.3.¹ We shall, in conformity with general
usage, call i an index and not a variable; but this does not alter the nature
of the case. In particular we have

{x_1, …, x_n} = {y_1, …, y_n}

if and only if x_i = y_i for all i = 1, …, n (cf. the end of 13.1.3.). One
could even take the view that L_n is the simplest possible space of (numerical)
functions, where the domain is a fixed finite set: the set (1, …, n).²
We shall also call these n-uplets or functions of L_n points or vectors of
L_n, and write

(16:1)  x = {x_1, …, x_n}.

The x_i for the specific i = 1, …, n (the values of the function x) are
the components of the vector x.
16.1.3. We mention, although this is not essential for our further work,
that L_n is not an abstract Euclidean space, but one in which a frame of
reference (system of coordinates) has already been chosen.³ This is due
to the possibility of specifying the origin and the coordinate vectors of L_n
numerically (cf. below), but we do not propose to dwell upon this aspect
of the matter.
The zero vector or origin of L_n is

0 = {0, …, 0}.

The n coordinate vectors of L_n are the

δ^j = {0, …, 1, …, 0} = {δ_{1j}, …, δ_{nj}},  j = 1, …, n,

where

δ_{ij} = 1 for i = j,
δ_{ij} = 0 for i ≠ j.⁴ ⁵
After these preliminaries we can now describe the fundamental oper
ations and properties of vectors in L n .
16.2. Vector Operations
16.2.1. The main operations involving vectors are those of scalar
multiplication, i.e. the multiplication of a vector x by a number t, and of
¹ I.e. the n-uplets {x_1, …, x_n} are not merely sets in the sense of 8.2.1. The
effective enumeration of the x_i by means of the index i = 1, …, n is just as essential
as the aggregate of their values. Cf. the similar situation in footnote 4 on p. 69.
² Much in modern analysis tends to corroborate this attitude.
³ This at least is the orthodox geometrical standpoint.
⁴ Thus the zero vector has all components 0, while the coordinate vectors have all
components 0 but one, that one component being 1, and its index j for the j-th coordinate
vector.
⁵ δ_{ij} is the "symbol of Kronecker and Weierstrass," which is quite useful in many
respects.
vector addition, i.e. addition of two vectors. The two operations are defined
by the corresponding operations, i.e. multiplication and addition, on the
components of the vector in question. More precisely:
Scalar multiplication: t{x_1, …, x_n} = {tx_1, …, tx_n}.
Vector addition:
{x_1, …, x_n} + {y_1, …, y_n} = {x_1 + y_1, …, x_n + y_n}.
The algebra of these operations is so simple and obvious that we forego
its discussion. We note, however, that they permit the expression of any
vector x = {x_1, …, x_n} with the help of its components and the coordinate
vectors of L_n:

x = Σ_{j=1}^n x_j δ^j.¹
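As a concrete check (a sketch in Python; representing vectors as tuples is my own encoding, not the text's), the operations and the decomposition through coordinate vectors can be verified directly:

```python
# A sketch of the operations of 16.2.1 in L_n, with vectors as tuples.

def scalar_mul(t, x):
    # scalar multiplication t{x_1, ..., x_n} = {t x_1, ..., t x_n}
    return tuple(t * xi for xi in x)

def vec_add(x, y):
    # vector addition, componentwise
    return tuple(xi + yi for xi, yi in zip(x, y))

def delta(j, n):
    # the j-th coordinate vector of L_n (j = 1, ..., n)
    return tuple(1 if i == j else 0 for i in range(1, n + 1))

x = (3, -1, 4)
n = len(x)
# x expressed through its components and the coordinate vectors:
rebuilt = (0,) * n
for j in range(1, n + 1):
    rebuilt = vec_add(rebuilt, scalar_mul(x[j - 1], delta(j, n)))
assert rebuilt == x
```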
Some important subsets of L_n:

(16:A:a) Consider a (linear, inhomogeneous) equation

(16:2:a)  Σ_{i=1}^n a_i x_i = b

(a_1, …, a_n, b are constants). We exclude

a_1 = ⋯ = a_n = 0,

since in that case there would be no equation at all. All
points (vectors) x = {x_1, …, x_n} which fulfill this equation
form a hyperplane.²

(16:A:b) Given a hyperplane

(16:2:a)  Σ_{i=1}^n a_i x_i = b,

it defines two parts of L_n. It cuts L_n into these two parts:

(16:2:b)  Σ_{i=1}^n a_i x_i > b,

and

(16:2:c)  Σ_{i=1}^n a_i x_i < b.

These are the two half spaces produced by the hyperplane.
¹ The x_j are numbers, and hence they act in x_j δ^j as scalar multipliers. Σ_{j=1}^n is a vector
summation.
² For n = 3, i.e. in ordinary (3-dimensional Euclidean) space, these are just the
ordinary (2-dimensional) planes. In our general case they are the ((n − 1)-dimensional)
analogues; hence the name.
Observe that if we replace a_1, …, a_n, b by −a_1, …, −a_n, −b, then
the hyperplane (16:2:a) remains unaffected, but the two half spaces (16:2:b),
(16:2:c) are interchanged. Hence we may always assume a half space to
be given in the form (16:2:b).

(16:A:c) Given two points (vectors) x, y and a t ≥ 0 with 1 − t ≥ 0;
then the center of gravity of x, y, with the respective weights
t, 1 − t, in the sense of mechanics, is t x + (1 − t) y.
The equations

x = {x_1, …, x_n},  y = {y_1, …, y_n},
t x + (1 − t) y = {t x_1 + (1 − t) y_1, …, t x_n + (1 − t) y_n}

should make this amply clear.

A subset C of L_n which contains all centers of gravity of all its points,
i.e. which contains, with x and y, all t x + (1 − t) y, 0 ≤ t ≤ 1, is convex.
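In coordinates, the center of gravity of (16:A:c) is computed componentwise. A minimal sketch (Python, with illustrative points of my own):

```python
# Center of gravity t*x + (1 - t)*y of (16:A:c), computed componentwise.

def center_of_gravity(t, x, y):
    assert 0 <= t <= 1
    return tuple(t * xi + (1 - t) * yi for xi, yi in zip(x, y))

# With weights 1/2, 1/2 this is the ordinary midpoint:
mid = center_of_gravity(0.5, (0.0, 0.0), (2.0, 4.0))
assert mid == (1.0, 2.0)
# t = 1 gives x itself, t = 0 gives y:
assert center_of_gravity(1, (1.0, 1.0), (9.0, 9.0)) == (1.0, 1.0)
assert center_of_gravity(0, (1.0, 1.0), (9.0, 9.0)) == (9.0, 9.0)
```

A set is convex precisely when it is closed under this operation for all 0 ≤ t ≤ 1.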
The reader will note that for n = 2, 3, i.e. in the ordinary plane or space,
this is the customary concept of convexity. Indeed, the set of all points
t x + (1 − t) y, 0 ≤ t ≤ 1, is precisely the linear (straight) interval connecting
the points x and y: the interval [x, y]. And so a convex set is one
which, with any two of its points x, y, also contains their interval [x, y].
Figure 16 shows the conditions for n = 2, i.e. in the plane.
16.2.2. Clearly the intersection of any number of convex sets is again
convex. Hence if any number of points (vectors) x^1, …, x^p is given,
there exists a smallest convex set containing them all: the intersection of
all convex sets which contain x^1, …, x^p. This is the convex set spanned
by x^1, …, x^p. It is again useful to visualize the case n = 2 (plane).
Cf. Fig. 17, where p = 6. It is easy to verify that this set consists of all
points (vectors)

(16:2:d)  Σ_{j=1}^p t_j x^j  for all t_1 ≥ 0, …, t_p ≥ 0 with Σ_{j=1}^p t_j = 1.

Proof: The points (16:2:d) form a set containing all x^1, …, x^p:
x^j is such a point; put t_j = 1 and all other t_i = 0.
The points (16:2:d) form a convex set: If x = Σ_{j=1}^p t_j x^j and y = Σ_{j=1}^p s_j x^j,
then t x + (1 − t) y = Σ_{j=1}^p w_j x^j with w_j = t t_j + (1 − t) s_j.

Any convex set D containing x^1, …, x^p contains also all points
of (16:2:d): We prove this by induction for all p = 1, 2, ….

Proof: For p = 1 it is obvious, since then t_1 = 1 and so x^1 is the only
point of (16:2:d).

Figure 17. (Shaded area: the convex spanned by x^1, …, x^6.)

Assume that it is true for p − 1. Consider p itself. If Σ_{j=1}^{p−1} t_j = 0, then
t_1 = ⋯ = t_{p−1} = 0; the point of (16:2:d) is x^p and thus belongs to D. If
t* = Σ_{j=1}^{p−1} t_j > 0, then t* = 1 − t_p (since Σ_{j=1}^p t_j = 1). Hence 0 < t* ≤ 1.
Put s_j = t_j/t* for j = 1, …, p − 1. So Σ_{j=1}^{p−1} s_j = 1. Hence, by our
assumption for p − 1, Σ_{j=1}^{p−1} s_j x^j is in D. D is convex, hence

t* Σ_{j=1}^{p−1} s_j x^j + (1 − t*) x^p

is also in D; but this vector is equal to

Σ_{j=1}^p t_j x^j,

which thus belongs to D.
The proof is therefore completed.
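The convexity step of the proof can be mirrored numerically. The sketch below (Python; the spanning points and weights are illustrative choices of mine) exhibits t x + (1 − t) y as a point of the form (16:2:d), with the weights w_j = t t_j + (1 − t) s_j used above:

```python
# The convexity step of the proof above, numerically: if x = sum t_j x^j and
# y = sum s_j x^j are two points of (16:2:d), then t*x + (1 - t)*y is again
# of that form, with weights w_j = t*t_j + (1 - t)*s_j.

points = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]      # the x^1, ..., x^p

def combo(weights):
    # a point of (16:2:d): weights must lie in S_p
    assert min(weights) >= 0 and abs(sum(weights) - 1.0) < 1e-12
    return tuple(sum(w * p[i] for w, p in zip(weights, points))
                 for i in range(2))

t_w, s_w, t = (0.5, 0.5, 0.0), (0.2, 0.3, 0.5), 0.25
x, y = combo(t_w), combo(s_w)
w_w = tuple(t * tj + (1 - t) * sj for tj, sj in zip(t_w, s_w))
direct = tuple(t * xi + (1 - t) * yi for xi, yi in zip(x, y))
# same point, now exhibited as a (16:2:d) combination with weights in S_p:
assert all(abs(a - b) < 1e-9 for a, b in zip(combo(w_w), direct))
assert min(w_w) >= 0 and abs(sum(w_w) - 1.0) < 1e-9
```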
The t_1, …, t_p of (16:2:d) may themselves be viewed as the components
of a vector t = {t_1, …, t_p} in L_p. It is therefore appropriate
to give a name to the set to which they are restricted, defined by

t_1 ≥ 0, …, t_p ≥ 0,

and

Σ_{j=1}^p t_j = 1.

We shall denote it by S_p. It is also convenient to give a name to the set
which is described by the first line of conditions above alone, i.e. by t_1 ≥ 0,
…, t_p ≥ 0. We shall denote it by P_p. Both sets S_p, P_p are convex.
Let us picture the cases p = 2 (plane) and p = 3 (space). P_2 is the
positive quadrant, the area between the positive x_1 and x_2 axes (Figure 18).
P_3 is the positive octant, the space between the positive x_1, x_2 and x_3 axes,
i.e. between the plane quadrants limited by the pairs x_1, x_2; x_1, x_3; x_2, x_3 of
these (Figure 19). S_2 is a linear interval crossing P_2 (Figure 18). S_3 is a plane
triangle, likewise crossing P_3 (Figure 19). It is useful to draw S_2, S_3 separately,
without the P_2, P_3 (or even the L_2, L_3) into which they are naturally
immersed (Figures 20, 21). We have indicated on these figures those distances
which are proportional to x_1, x_2 or x_1, x_2, x_3, respectively.
(We re-emphasize: The distances marked x_1, x_2, x_3 in Figures 20, 21 are
not the coordinates x_1, x_2, x_3 themselves. These lie in L_2 or L_3 outside of
S_2 or S_3, and therefore cannot be pictured in S_2 or S_3; but they are easily
seen to be proportional to those coordinates.)
16.2.3. Another important notion is the length of a vector. The length
of x = {x_1, …, x_n} is

|x| = √(Σ_{i=1}^n x_i²).

The distance of two points (vectors) is the length of their difference:

|x − y| = √(Σ_{i=1}^n (x_i − y_i)²).

Thus the length of x is the distance from the origin 0.¹
16.3. The Theorem of the Supporting Hyperplanes
16.3. We shall now establish an important general property of convex
sets:
(16:B) Let p vectors x^1, …, x^p be given. Then a vector y
either belongs to the convex C spanned by x^1, …, x^p (cf.
(16:A:c) in 16.2.1.), or there exists a hyperplane which contains
y (cf. (16:2:a) in 16.2.1.) such that all of C is contained in
one half space produced by that hyperplane (say (16:2:b) in
16.2.1.; cf. (16:A:b) id.).

This is true even if the convex spanned by x^1, …, x^p is replaced by
any convex set. In this form it is a fundamental tool in the modern theory
of convex sets.
A picture in the case n = 2 (plane) follows: Figure 22 uses the convex
set C of Figure 17 (which is spanned by a finite number of points, as in
the assertion above), while Figure 23 shows a general convex set C.²
Before proving (16:B), we observe that the second alternative clearly
excludes the first, since y belongs to the hyperplane, hence not to the half
space. (I.e. it fulfills (16:2:a) and not (16:2:b) in (16:A:b) above.)
We now give the proof:
Proof: Assume that y does not belong to C. Then consider a point of
C which lies as near to y as possible, i.e. for which
¹ The Euclidean (Pythagorean) meaning of these notions is immediate.
² For the reader who is familiar with topology, we add: To be exact, this sentence
should be qualified: the statement is meant for closed convex sets. This guarantees
the existence of the minimum that we use in the proof that follows. Regarding these
|z − y|² = Σ_{i=1}^n (z_i − y_i)²

assumes its minimum value.
Figure 22. (The hyperplane and the half space, for the convex set C of Figure 17.)
Figure 23. (The hyperplane and the half space, for a general convex set C.)
Consider any other point u of C. Then for every t with 0 ≤ t ≤ 1,
t u + (1 − t) z also belongs to the convex C. By virtue of the minimum
property of z (cf. above) this necessitates

|t u + (1 − t) z − y|² ≥ |z − y|²,

i.e.

|(z − y) + t(u − z)|² ≥ |z − y|².

By elementary algebra this means

2t Σ_{i=1}^n (z_i − y_i)(u_i − z_i) + t² Σ_{i=1}^n (u_i − z_i)² ≥ 0.
So for t > 0 (but of course t ≤ 1) even

2 Σ_{i=1}^n (z_i − y_i)(u_i − z_i) + t Σ_{i=1}^n (u_i − z_i)² ≥ 0.

If t converges to 0, then the left-hand side converges to 2 Σ_{i=1}^n (z_i − y_i)(u_i − z_i).
Hence

(16:3)  Σ_{i=1}^n (z_i − y_i)(u_i − z_i) ≥ 0.

As u_i − y_i = (u_i − z_i) + (z_i − y_i), this means

Σ_{i=1}^n (z_i − y_i)(u_i − y_i) ≥ Σ_{i=1}^n (z_i − y_i)² = |z − y|².

Now z ≠ y (as z belongs to C, but y does not); hence |z − y|² > 0.
So the left-hand side above is > 0. I.e.

(16:4)  Σ_{i=1}^n (z_i − y_i)u_i > Σ_{i=1}^n (z_i − y_i)y_i.
Put a_i = z_i − y_i; then a_1 = ⋯ = a_n = 0 is excluded by z ≠ y (cf.
above). Put also b = Σ_{i=1}^n a_i y_i. Thus

(16:2:a*)  Σ_{i=1}^n a_i x_i = b

defines a hyperplane, to which y clearly belongs. Next

(16:2:b*)  Σ_{i=1}^n a_i x_i > b

is a half space produced by this hyperplane, and (16:4) states precisely
that u belongs to this half space.
Since u was an arbitrary element of C, this completes the proof.
This algebraic proof can also be stated in the geometrical language.
Let us do this for the case n = 2 (plane) first. The situation is pictured
in Figure 24: z is a point of C which is as near to the given point y as possible,
i.e. for which the distance of y and z, |z − y|, assumes its minimum value.
Since y, z are fixed, and u is a variable point (of C), therefore (16:3) defines
a hyperplane and one of the half spaces produced by it. And it is easy
to verify that z belongs to this hyperplane, and that it consists of those
points u for which the angle formed by the three points y, z, u at z is a right angle
(i.e. for which the vectors z − y and u − z are orthogonal). This means,
indeed, that Σ_{i=1}^n (z_i − y_i)(u_i − z_i) = 0. Clearly all of C must lie on this
hyperplane, or on that side of it which is away from y. If any point u
of C did lie on the y side, then some points of the interval [z, u] would be
nearer to y than z is. (Cf. Figure 25. The computation on pp. 135-136,
properly interpreted, shows precisely this.) Since C contains z and u,
and so all of [z, u], this would contradict the statement that z is as near
to y as possible in C.

Figure 25. (The interval [z, u]; the part of the interval nearer to y than z; the hyperplane of (16:3).)
Figure 26. (The hyperplanes of (16:3) and (16:4).)
Now our passage from (16:3) to (16:4) amounts to a parallel shift of
this hyperplane from z to y (parallel, because the coefficients a_i = z_i − y_i
of the u_i, i = 1, …, n, are unaltered). Now y lies on the hyperplane, and
all of C in one half space produced by it (Figure 26).
The case n = 3 (space) could be visualized in a similar way.
It is even possible to account for a general n in this geometrical manner.
If the reader can persuade himself that he possesses n-dimensional "geometrical
intuition" he may accept the above as a proof which is equally
valid in n dimensions. It is even possible to avoid this by arguing as
follows: Whatever n, the entire proof deals with only three points at once,
namely y, z, u. Now it is always possible to lay a (2-dimensional) plane
through three given points. If we consider only the situation in this
plane, then Figures 24-26 and the associated argument can be used without
any reinterpretation.
Be this as it may, the purely algebraic proof given above is absolutely
rigorous at any rate. We gave the geometrical analogies mainly in the
hope that they may facilitate the understanding of the algebraic operations
performed in that proof.
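The construction in the proof of (16:B) can also be imitated numerically for a convex set spanned by finitely many points. In the sketch below (Python) the spanning points, the outside point y, and the brute-force grid search over S_3 standing in for the exact minimization are all illustrative devices of mine, not the text's construction:

```python
# A numerical sketch of (16:B): approximate the nearest point z of C to y
# by a brute-force search over simplex weights, then check that the
# hyperplane with a_i = z_i - y_i, b = sum a_i y_i leaves all of C on one
# side, as in (16:4).

points = [(1.0, 0.0), (3.0, 0.0), (1.0, 2.0)]     # spanning points x^j
y = (0.0, 0.0)                                    # a point outside C

def combo(w):
    return tuple(sum(wj * p[i] for wj, p in zip(w, points)) for i in range(2))

def dist2(u, v):
    return sum((ui - vi) ** 2 for ui, vi in zip(u, v))

N = 200                                           # grid resolution on S_3
best = min((combo((i / N, j / N, (N - i - j) / N))
            for i in range(N + 1) for j in range(N + 1 - i)),
           key=lambda z: dist2(z, y))

a = tuple(zi - yi for zi, yi in zip(best, y))     # a_i = z_i - y_i
b = sum(ai * yi for ai, yi in zip(a, y))          # b = sum a_i y_i
# every spanning point (hence all of C) satisfies sum a_i u_i > b:
assert all(sum(ai * pi for ai, pi in zip(a, p)) > b for p in points)
```

Here the grid happens to find the exact minimizer z = (1, 0); in general the grid search only approximates it, which is why this is a plausibility check rather than a proof.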
16.4. The Theorem of the Alternative for Matrices
16.4.1. The theorem (16:B) of 16.3. permits an inference which will be
fundamental for our subsequent work.
We start by considering a rectangular matrix in the sense of 13.3.3.
with n rows and m columns, and the matrix element a(i, j). (Cf. Figure 11
in 13.3.3. The φ, x, y, t, s there correspond to our a, i, j, n, m.) I.e.
a(i, j) is a perfectly arbitrary function of the two variables i = 1, …, n;
j = 1, …, m. Next we form certain vectors in L_n: For each j = 1, …,
m the vector x^j = {x^j_1, …, x^j_n} with x^j_i = a(i, j), and for each l = 1,
…, n the coordinate vector δ^l = {δ_{il}}. (Cf. for the latter the end of
16.1.3.; we have replaced the j there by our l.) Let us now apply the
theorem (16:B) of 16.3. for p = n + m to these n + m vectors x^1, …,
x^m, δ^1, …, δ^n. (They replace the x^1, …, x^p loc. cit.) We put

y = 0.

The convex C spanned by x^1, …, x^m, δ^1, …, δ^n may contain 0.
If this is the case, then we can conclude from (16:2:d) in 16.2.2. that

Σ_{j=1}^m t_j x^j + Σ_{l=1}^n s_l δ^l = 0

with

(16:5)  t_1 ≥ 0, …, t_m ≥ 0, s_1 ≥ 0, …, s_n ≥ 0,

(16:6)  Σ_{j=1}^m t_j + Σ_{l=1}^n s_l = 1.
t_1, …, t_m, s_1, …, s_n replace the t_1, …, t_p (loc. cit.). In terms of the
components this means

Σ_{j=1}^m t_j a(i, j) + Σ_{l=1}^n s_l δ_{il} = 0  for i = 1, …, n.

The second term on the left-hand side is equal to s_i, so we write

(16:7)  Σ_{j=1}^m a(i, j) t_j = −s_i.

If we had Σ_{j=1}^m t_j = 0, then t_1 = ⋯ = t_m = 0, hence by (16:7) s_1 = ⋯ =
s_n = 0, thus contradicting (16:6). Hence Σ_{j=1}^m t_j > 0. We replace (16:7)
by its corollary

(16:8)  Σ_{j=1}^m a(i, j) t_j ≤ 0.

Put x_j = t_j / Σ_{j′=1}^m t_{j′} for j = 1, …, m. Then we have Σ_{j=1}^m x_j = 1 and
(16:5) gives x_1 ≥ 0, …, x_m ≥ 0. Hence

(16:9)  x = {x_1, …, x_m} belongs to S_m,

and (16:8) gives

(16:10)  Σ_{j=1}^m a(i, j) x_j ≤ 0  for i = 1, …, n.
Consider, on the other hand, the possibility that C does not contain 0.
Then the theorem (16:B) of 16.3. permits us to infer the existence of a
hyperplane which contains y = 0 (cf. (16:2:a) in 16.2.1.), such that all of C
is contained in one half space produced by that hyperplane (cf. (16:2:b) in
16.2.1.). Denote this hyperplane by

Σ_{i=1}^n a_i x_i = b.

Since 0 belongs to it, therefore b = 0. So the half space in question is

(16:11)  Σ_{i=1}^n a_i x_i > 0.
x^1, …, x^m, δ^1, …, δ^n belong to this half space. Stating this for
δ^l, (16:11) becomes Σ_{i=1}^n a_i δ_{il} > 0, i.e. a_l > 0. So we have

(16:12)  a_1 > 0, …, a_n > 0.

Stating it for x^j, (16:11) becomes

(16:13)  Σ_{i=1}^n a(i, j) a_i > 0.

Put w_i = a_i / Σ_{i′=1}^n a_{i′} for i = 1, …, n. Then we have Σ_{i=1}^n w_i = 1 and
(16:12) gives w_1 > 0, …, w_n > 0. Hence

(16:14)  w = {w_1, …, w_n} belongs to S_n.

And (16:13) gives

(16:15)  Σ_{i=1}^n a(i, j) w_i > 0  for j = 1, …, m.
Summing up (16:9), (16:10), (16:14), (16:15), we may state:

(16:C) Let a rectangular matrix with n rows and m columns be
given. Denote its matrix element by a(i, j), i = 1, …, n;
j = 1, …, m. Then there exists either a vector x =
{x_1, …, x_m} in S_m with

(16:16:a)  Σ_{j=1}^m a(i, j) x_j ≤ 0  for i = 1, …, n,

or a vector w = {w_1, …, w_n} in S_n with

(16:16:b)  Σ_{i=1}^n a(i, j) w_i > 0  for j = 1, …, m.
We observe further:

The two alternatives (16:16:a), (16:16:b) exclude each other.

Proof: Assume both (16:16:a) and (16:16:b). Multiply each (16:16:a) by
w_i and sum over i = 1, …, n; this gives

Σ_{i=1}^n Σ_{j=1}^m a(i, j) w_i x_j ≤ 0.

Multiply each (16:16:b) by x_j and sum over j = 1, …, m; this gives

Σ_{i=1}^n Σ_{j=1}^m a(i, j) w_i x_j > 0.¹

Thus we have a contradiction.
16.4.2. We replace the matrix a(i, j) by its negative transposed matrix;
i.e. we denote the columns (and not, as before, the rows) by i = 1, …, n,
and the rows (and not, as before, the columns) by j = 1, …, m. And
we let the matrix element be −a(i, j) (and not, as before, a(i, j)). (Thus
n, m too are interchanged.)
We restate now the final results of 16.4.1. as applied to this new matrix.
But in formulating them, we let x′ = {x′_1, …, x′_m} play the role which
w = {w_1, …, w_n} had, and w′ = {w′_1, …, w′_n} the role which x =
{x_1, …, x_m} had. And we announce the result in terms of the original
matrix.
Then we have:

(16:D) Let a rectangular matrix with n rows and m columns be
given. Denote its matrix element by a(i, j), i = 1, …, n;
j = 1, …, m. Then there exists either a vector x′ =
{x′_1, …, x′_m} in S_m with

(16:17:a)  Σ_{j=1}^m a(i, j) x′_j < 0  for i = 1, …, n,

or a vector w′ = {w′_1, …, w′_n} in S_n with

(16:17:b)  Σ_{i=1}^n a(i, j) w′_i ≥ 0  for j = 1, …, m.

And the two alternatives exclude each other.
16.4.3. We now combine the results of 16.4.1. and 16.4.2. They imply
that we must have (16:17:a), or (16:16:b), or (16:16:a) and (16:17:b)
simultaneously; and also that these three possibilities exclude each other.
Using the same matrix a(i, j), but writing x, w, x′, w′ for the vectors
x′, w, x, w′ in 16.4.1., 16.4.2., we obtain this:

¹ > 0 and not merely ≥ 0. Indeed, 0 would necessitate x_1 = ⋯ = x_m = 0, which
is impossible since Σ_{j=1}^m x_j = 1.
(16:E) There exists either a vector x = {x_1, …, x_m} in S_m with

(16:18:a)  Σ_{j=1}^m a(i, j) x_j < 0  for i = 1, …, n,

or a vector w = {w_1, …, w_n} in S_n with

(16:18:b)  Σ_{i=1}^n a(i, j) w_i > 0  for j = 1, …, m,

or two vectors x′ = {x′_1, …, x′_m} in S_m and w′ = {w′_1, …, w′_n}
in S_n with

(16:18:c)  Σ_{j=1}^m a(i, j) x′_j ≤ 0  for i = 1, …, n,
           Σ_{i=1}^n a(i, j) w′_i ≥ 0  for j = 1, …, m.

The three alternatives (16:18:a), (16:18:b), (16:18:c) exclude each
other.
By combining (16:18:a) and (16:18:c) on one hand, and (16:18:b) and
(16:18:c) on the other, we get this simpler but weaker statement:¹ ²

(16:F) There exists either a vector x = {x_1, …, x_m} in S_m with

(16:19:a)  Σ_{j=1}^m a(i, j) x_j ≤ 0  for i = 1, …, n,

or a vector w = {w_1, …, w_n} in S_n with

(16:19:b)  Σ_{i=1}^n a(i, j) w_i ≥ 0  for j = 1, …, m.
16.4.4. Consider now a skew-symmetric matrix a(i, j), i.e. one which
coincides with its negative transposed in the sense of 16.4.2.; i.e. n = m and

a(i, j) = −a(j, i)  for i, j = 1, …, n.

¹ The two alternatives (16:19:a), (16:19:b) do not exclude each other: Their conjunction
is precisely (16:18:c).
² This result could also have been obtained directly from the final result of 16.4.1.:
(16:19:a) is precisely (16:16:a) there, and (16:19:b) is a weakened form of (16:16:b)
there. We gave the above more detailed discussion because it gives a better insight
into the entire situation.
MIXED STRATEGIES. THE SOLUTION 143
Then the conditions (16:19:a) and (16:19:b) in 16.4.3. express the same
thing: Indeed, (16:19:b) is

Σ_{i=1}^n a(i, j) w_i ≥ 0;

this may be written

Σ_{i=1}^n −a(j, i) w_i ≥ 0, i.e. Σ_{i=1}^n a(j, i) w_i ≤ 0.

We need only write j, i for i, j,¹ so that this becomes Σ_{j=1}^n a(i, j) w_j ≤ 0, and
then x for w,¹ so that we have Σ_{j=1}^n a(i, j) x_j ≤ 0. And this is precisely
(16:19:a).
Therefore we can replace the disjunction of (16:19:a) and (16:19:b)
by either one of them, say by (16:19:b). So we obtain:

(16:G) If the matrix a(i, j) is skew-symmetric (and therefore n = m,
cf. above), then there exists a vector w = {w_1, …, w_n} in
S_n with

Σ_{i=1}^n a(i, j) w_i ≥ 0  for j = 1, …, n.
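(16:G) can be illustrated on a concrete skew-symmetric matrix. The sketch below (Python) uses a 3 × 3 matrix of the Stone, Paper, Scissors type (an illustrative choice of mine; cf. 14.7.3.), for which the uniform w of S_3 satisfies the condition with equality:

```python
# (16:G) illustrated on a skew-symmetric 3x3 matrix of the Stone, Paper,
# Scissors type (an illustrative example, not a construction from the text).

a = [[ 0,  1, -1],
     [-1,  0,  1],
     [ 1, -1,  0]]
n = len(a)
# skew-symmetry: a(i, j) = -a(j, i)
assert all(a[i][j] == -a[j][i] for i in range(n) for j in range(n))

w = [1 / 3, 1 / 3, 1 / 3]                     # a vector of S_n
assert abs(sum(w) - 1) < 1e-12 and min(w) >= 0
# sum_i a(i, j) w_i >= 0 for every j (here all three sums are exactly 0):
for j in range(n):
    assert sum(a[i][j] * w[i] for i in range(n)) >= 0
```

That the guaranteed w yields exactly 0 in every column is no accident for a skew-symmetric matrix; this foreshadows the value 0 of symmetric games treated later.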
17. Mixed Strategies. The Solution for All Games
17.1. Discussion of Two Elementary Examples
17.1.1. In order to overcome the difficulties in the non-strictly determined
case, which we observed particularly in 14.7., it is best to reconsider the
simplest examples of this phenomenon. These are the games of Matching
Pennies and of Stone, Paper, Scissors (cf. 14.7.2., 14.7.3.). Since an
empirical, common-sense attitude with respect to the "problems" of these
games exists, we may hope to get a clue for the solution of non-strictly
determined (zero-sum two-person) games by observing and analyzing these
attitudes.
It was pointed out that, e.g. in Matching Pennies, no particular way
of playing i.e. neither playing "heads" nor playing "tails" is any
better than the other, and all that matters is to find out the opponent's inten
tions. This seems to block the way to a solution, since the rules of the
game in question explicitly bar each player from the knowledge about the
opponent's actions, at the moment when he has to make his choice. But
¹ Observe that now, with n = m, this is only a change in notation!
the above observation does not correspond fully to the realities of the case:
In playing Matching Pennies against an at least moderately intelligent
opponent, the player will not attempt to find out the opponent's intentions
but will concentrate on avoiding having his own intentions found out, by
playing irregularly "heads" and "tails" in successive games. Since we wish to describe the strategy in one play (indeed we must discuss the course in one play and not that of a sequence of successive plays), it is preferable to express this as follows: The player's strategy consists neither of playing "tails" nor of playing "heads," but of playing "tails" with the probability of 1/2 and "heads" with the probability of 1/2.
17.1.2. One might imagine that in order to play Matching Pennies in a
rational way the player will before his choice in each play decide by
some 50:50 chance device whether to play "heads" or "tails." 1 The
point is that this procedure protects him from loss. Indeed, whatever
strategy the opponent follows, the player's expectation for the outcome
of the play will be zero. 2 This is true in particular if with certainty the
opponent plays "tails," and also if with certainty he plays "heads"; and
also, finally, if he like the player himself may play both "heads" and
"tails," with certain probabilities. 3
Thus, if we permit a player in Matching Pennies to use a "statistical"
strategy, i.e. to "mix" the possible ways of playing with certain proba
bilities (chosen by him), then he can protect himself against loss. Indeed,
we specified above such a statistical strategy with which he cannot lose,
irrespective of what his opponent does. The same is true for the opponent,
i.e. the opponent can use a statistical strategy which prevents the player
from winning, irrespective of what the player does. 4
The reader will observe the great similarity of this with the discussions of
14.5. 5 In the spirit of those discussions it seems legitimate to consider
zero as the value of a play of Matching Pennies and the 50 : 50 statistical
mixture of "heads" and "tails" as a good strategy.
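This guarantee can be checked by direct computation. The sketch below is our own illustration (the payoff matrix and the function name are ours, not the text's): it evaluates player 1's expectation in Matching Pennies when he uses the 50:50 mixture against several choices of the opponent.

```python
# Payoff matrix H[t1][t2] for player 1 in Matching Pennies:
# index 0 = "heads", 1 = "tails"; player 1 wins on a match.
H = [[1, -1],
     [-1, 1]]

def expectation(xi, eta):
    """Expected outcome for player 1 when the players use the
    probability vectors xi and eta."""
    return sum(H[t1][t2] * xi[t1] * eta[t2]
               for t1 in range(2) for t2 in range(2))

fifty_fifty = [0.5, 0.5]
# Against "heads" with certainty, "tails" with certainty, or an
# arbitrary p : 1 - p mixture, the expectation vanishes every time.
for eta in ([1, 0], [0, 1], [0.3, 0.7]):
    print(expectation(fifty_fifty, eta))
```

Since the expectation is linear in the opponent's probabilities, vanishing against both pure strategies already forces it to vanish against every mixture.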
The situation in Stone, Paper, Scissors is entirely similar. Common sense will tell that the good way of playing is to play all three alternatives with the probabilities of 1/3 each.⁶ The value of a play, as well as the interpretation of the above strategy as a good one, can be motivated as before, again in the sense of the quotation there.¹

¹ E.g. he could throw a die (of course without letting the opponent see the result) and then play "tails" if the number of spots showing is even, and "heads" if that number is odd.
² I.e. his probability of winning equals his probability of losing, because under these conditions the probability of matching, as well as that of not matching, will be 1/2, whatever the opponent's conduct.
³ Say p, 1 − p. For the player himself we used the probabilities 1/2, 1/2.
⁴ All this, of course, in the statistical sense: that the player cannot lose means that his probability of losing is ≦ his probability of winning. That he cannot win means that the former is ≧ the latter. Actually each play will be won or lost, since Matching Pennies knows no ties.
⁵ We mean specifically (14:C:d), (14:C:e) in 14.5.1.
⁶ A chance device could be introduced as before. The die mentioned in footnote 1 above would be a possible one. E.g. the player could decide "stone" if 1 or 2 spots show, "paper" if 3 or 4 spots show, "scissors" if 5 or 6 show.
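The same check works for Stone, Paper, Scissors: against every pure strategy of the opponent, and hence against every mixture, the 1/3, 1/3, 1/3 mixture yields expectation zero. Again a sketch of our own, not part of the text:

```python
# Payoff matrix for player 1 in Stone, Paper, Scissors:
# index 0 = stone, 1 = paper, 2 = scissors; 1 = win, -1 = loss, 0 = tie.
H = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]

def expectation(xi, eta):
    # Mathematical expectation of player 1's outcome under mixtures xi, eta.
    return sum(H[i][j] * xi[i] * eta[j] for i in range(3) for j in range(3))

third = [1 / 3, 1 / 3, 1 / 3]
for j in range(3):
    pure = [1 if k == j else 0 for k in range(3)]
    # Vanishes (up to rounding) for each pure reply of the opponent.
    print(expectation(third, pure))
```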
17.2. Generalization of This Viewpoint
17.2.1. It is plausible to try to extend the results found for Matching Pennies and Stone, Paper, Scissors to all zero-sum two-person games. We use the normalized form, the possible choices of the two players being τ₁ = 1, ..., β₁ and τ₂ = 1, ..., β₂, and the outcome for player 1 H(τ₁, τ₂), as formerly. We make no assumption of strict determinateness.
Let us now try to repeat the procedure which was successful in 17.1.; i.e. let us again visualize players whose "theory" of the game consists not in the choice of definite strategies but rather in the choice of several strategies with definite probabilities.² Thus player 1 will not choose a number τ₁ = 1, ..., β₁ (i.e. the corresponding strategy Σ₁^{τ₁}) but β₁ numbers ξ₁, ..., ξ_{β₁}, the probabilities of these strategies Σ₁^1, ..., Σ₁^{β₁}, respectively. Equally player 2 will not choose a number τ₂ = 1, ..., β₂ (i.e. the corresponding strategy Σ₂^{τ₂}) but β₂ numbers η₁, ..., η_{β₂}, the probabilities of these strategies Σ₂^1, ..., Σ₂^{β₂}, respectively. Since these probabilities belong to disjoint but exhaustive alternatives, the numbers ξ_{τ₁}, η_{τ₂} are subject to the conditions

(17:1:a)  all ξ_{τ₁} ≧ 0,  Σ_{τ₁=1}^{β₁} ξ_{τ₁} = 1;

(17:1:b)  all η_{τ₂} ≧ 0,  Σ_{τ₂=1}^{β₂} η_{τ₂} = 1,

and to no others.

We form the vectors ξ = {ξ₁, ..., ξ_{β₁}} and η = {η₁, ..., η_{β₂}}. Then the above conditions state that ξ must belong to S_{β₁} and η to S_{β₂}, in the sense of 16.2.2.
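In computational terms, conditions (17:1:a), (17:1:b) say that a mixed strategy is a vector of nonnegative components summing to 1, i.e. a point of the simplex. A minimal sketch (the helper names are ours, not the text's):

```python
def is_mixed_strategy(v, tol=1e-9):
    """True if v satisfies (17:1): all components nonnegative, summing to 1,
    i.e. v belongs to the simplex S_beta in the sense of 16.2.2."""
    return all(x >= -tol for x in v) and abs(sum(v) - 1) <= tol

def coordinate_vector(tau, beta):
    """The vector assigning probability 1 to the choice tau (1-based)."""
    return [1 if k == tau - 1 else 0 for k in range(beta)]

print(is_mixed_strategy([0.5, 0.5]))               # True: a proper mixture
print(is_mixed_strategy(coordinate_vector(2, 3)))  # True: all weight on one choice
print(is_mixed_strategy([0.7, 0.7]))               # False: sums to 1.4
```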
In this setup a player does not, as previously, choose his strategy, but
he plays all possible strategies and chooses only the probabilities with which
he is going to play them respectively. This generalization meets the major
difficulty of the not strictly determined case to a certain point: We have
seen that the characteristic of that case was that it constituted a definite
disadvantage 3 for each player to have his intentions found out by his
¹ In Stone, Paper, Scissors there exists a tie, but no loss still means that the probability of losing is ≦ the probability of winning, and no gain means the reverse. Cf. footnote 4 on p. 144.
² That these probabilities were the same for all strategies (1/2, 1/2 or 1/3, 1/3, 1/3 in the examples of the last paragraph) was, of course, accidental. It is to be expected that this equality was due to the symmetric way in which the various alternatives appeared in those games. We proceed now on the assumption that the "appearance" of probabilities in formulating a strategy was the essential thing, while the particular values were accidental.
³ The Δ > 0 of 14.7.1.
opponent. Thus one important consideration 1 for a player in such a game
is to protect himself against having his intentions found out by his opponent.
Playing several different strategies at random, so that only their probabili
ties are determined, is a very effective way to achieve a degree of such
protection: By this device the opponent cannot possibly find out what the
player's strategy is going to be, since the player does not know it himself. 2
Ignorance is obviously a very good safeguard against disclosing information
directly or indirectly.
17.2.2. It may now seem that we have incidentally restricted the
player's freedom of action. It may happen, after all, that he wishes to
play one definite strategy to the exclusion of all others; or that, while desir
ing to use certain strategies with certain probabilities, he wants to exclude
absolutely the remaining ones. 3 We emphasize that these possibilities are
perfectly within the scope of our scheme. A player who does not wish to
play certain strategies at all will simply choose for them the probabilities
zero. A player who wishes to play one strategy to the exclusion of all
others will choose for this strategy the probability 1 and for all other
strategies the probability zero.
Thus if player 1 wishes to play the strategy Σ₁^{τ₁} only, he will choose for ξ the coordinate vector δ^{τ₁} (cf. 16.1.3.). Similarly for player 2, the strategy Σ₂^{τ₂} and the vectors η and δ^{τ₂}.

In view of all these considerations we call a vector ξ of S_{β₁} or a vector η of S_{β₂} a statistical or mixed strategy of player 1 or 2, respectively. The coordinate vectors δ^{τ₁} or δ^{τ₂} correspond, as we saw, to the original strategies τ₁ or τ₂ (i.e. Σ₁^{τ₁} or Σ₂^{τ₂}) of player 1 or 2, respectively. We call them strict or pure strategies.
17.3. Justification of the Procedure As Applied to an Individual Play
17.3.1. At this stage the reader may have become uneasy and perceive
a contradiction between two viewpoints which we have stressed as equally
vital throughout our discussions. On the one hand we have always insisted
that our theory is a static one (cf. 4.8.2.), and that we analyze the course
1 But not necessarily the only one.
² If the opponent has enough statistical experience about the player's "style," or if he is very shrewd in rationalizing his expected behavior, he may discover the probabilities (frequencies) of the various strategies. (We need not discuss whether and how this may happen. Cf. the argument of 17.3.1.) But by the very concept of probability and randomness nobody under any conditions can foresee what will actually happen in any particular case. (Exception must be made for such probabilities as may vanish; cf. below.)
³ In this case he clearly increases the danger of having his strategy found out by the opponent. But it may be that the strategy or strategies in question have such intrinsic advantages over the others as to make this worth while. This happens, e.g. in an extreme form, for the "good" strategies of the strictly determined case (cf. 14.5., particularly (14:C:a), (14:C:b) in 14.5.2.).
of one play and not that of a sequence of successive plays (cf. 17.1.). But
on the other hand we have placed considerations concerning the danger of
one's strategy being found out by the opponent into an absolutely central
position (cf. 14.4., 14.7.1. and again the last part of 17.2.). How can
the strategy of a player particularly one who plays a random mixture of
several different strategies be found out if not by continued observation!
We have ruled out that this observation should extend over many plays.
Thus it would seem necessary to carry it out in a single play. Now even
if the rules of the game should be such as to make this possible i.e. if they
lead to long and repetitious plays the observation would be effected only
gradually and successively in the course of the play. It would not be avail
able at the beginning. And the whole thing would be tied up with various
dynamical considerations, while we insisted on a static theory! Besides,
the rules of the game may not even give such opportunities for observation; 1
they certainly do not in our original examples of Matching Pennies, and
Stone, Paper, Scissors. These conflicts and contradictions occur both in the
discussions of 14. where we used no probabilities in connection with the
choice of a strategy and in our present discussions of 17. where probabilities
will be used.
How are they to be solved?
17.3.2. Our answer is this:
To begin with, the ultimate proof of the results obtained in 14. and 17.
i.e. the discussions of 14.5. and of 17.8. do not contain any of these con
flicting elements. So we could answer that our final proofs are correct
even though the heuristic procedures which lead to them are questionable.
But even these procedures can be justified. We make no concessions: Our viewpoint is static and we are analyzing only a single play. We are trying to find a satisfactory theory, at this stage for the zero-sum two-person game. Consequently we are not arguing deductively from the firm basis of an existing theory which has already stood all reasonable tests, but we are searching for such a theory.² Now in doing this, it is perfectly legitimate for us to use the conventional tools of logic, and in particular
that of the indirect proof. This consists in imagining that we have a satis
factory theory of a certain desired type, 3 trying to picture the consequences
of this imaginary intellectual situation, and then in drawing conclusions
from this as to what the hypothetical theory must be like in detail. If this
process is applied successfully, it may narrow the possibilities for the hypo
thetical theory of the type in question to such an extent that only one
¹ I.e. "gradual," "successive" observations of the behavior of the opponent within one play.
² Our method is, of course, the empirical one: We are trying to understand, formalize and generalize those features of the simplest games which impress us as typical. This is, after all, the standard method of all sciences with an empirical basis.
³ This is in full cognizance of the fact that we do not (yet) possess one, and that we cannot imagine (yet) what it would be like if we had one. All this is, in its own domain, no worse than any other indirect proof in any part of science (e.g. the per absurdum proofs in mathematics and in physics).
possibility is left, i.e. that the theory is determined, discovered by this device.¹ Of course, it can happen that the application is even more "successful," and that it narrows the possibilities down to nothing, i.e. that it demonstrates that a consistent theory of the desired kind is inconceivable.²
17.3.3. Let us now imagine that there exists a complete theory of the
zerosum twoperson game which tells a player what to do, and which is
absolutely convincing. If the players knew such a theory then each player
would have to assume that his strategy has been "found out" by his opponent. The opponent knows the theory, and he knows that a player would be unwise not to follow it.³ Thus the hypothesis of the existence of a satisfactory theory legitimatizes our investigation of the situation when a player's strategy is "found out" by his opponent. And a satisfactory theory⁴ can exist only if we are able to harmonize the two extremes Γ₁ and Γ₂: strategies of player 1 "found out" or of player 2 "found out."
For the original treatment, free from probability (i.e. with pure strategies), the extent to which this can be done was determined in 14.5.
We saw that the strictly determined case is the one where there exists a
theory satisfactory on that basis. We are now trying to push further, by
using probabilities (i.e. with mixed strategies). The same device which we used in 14.5. when there were no probabilities will do again: the analysis of "finding out" the strategy of the other player.
It will turn out that this time the hypothetical theory can be determined
completely and in all cases (not merely for the strictly determined case; cf. 17.5.1., 17.6.).
After the theory is found we must justify it independently by a direct argument.⁵ This was done for the strictly determined case in 14.5., and
we shall do it for the present complete theory in 17.8.
¹ There are several important examples of this performance in physics. The successive approaches to Special and to General Relativity or to Wave Mechanics may be viewed as such. Cf. A. D'Abro: The Decline of Mechanism in Modern Physics, New York 1939.
² This too occurs in physics. The N. Bohr-Heisenberg analysis of "quantities which are not simultaneously observable" in Quantum Mechanics permits this interpretation. Cf. N. Bohr: Atomic Theory and the Description of Nature, Cambridge 1934, and P. A. M. Dirac: The Principles of Quantum Mechanics, London 1931, Chap. I.
³ Why it would be unwise not to follow it is none of our concern at present; we have assumed that the theory is absolutely convincing. That this is not impossible will appear from our final result. We shall find a theory which is satisfactory; nevertheless it implies that the player's strategy is found out by his opponent. But the theory gives him the directions which permit him to adjust himself so that this causes no loss. (Cf. the theorem of 17.6. and the discussion of our complete solution in 17.8.)
⁴ I.e. a theory using our present devices only. Of course we do not pretend to be able to make "absolute" statements. If our present requirements should turn out to be unfulfillable we should have to look for another basis for a theory. We have actually done this once by passing from 14. (with pure strategies) to 17. (with mixed strategies).
⁵ The indirect argument, as outlined above, gives only necessary conditions. Hence it may establish absurdity (per absurdum proof), or narrow down the possibilities to one; but in the latter case it is still necessary to show that the one remaining possibility is satisfactory.
17.4. The Minorant and the Majorant Games (For Mixed Strategies)
17.4.1. Our present picture is then that player 1 chooses an arbitrary element ξ from S_{β₁} and that player 2 chooses an arbitrary element η from S_{β₂}.

Thus if player 1 wishes to play the strategy Σ₁^{τ₁} only, he will choose for ξ the coordinate vector δ^{τ₁} (cf. 16.1.3.); similarly for player 2, the strategy Σ₂^{τ₂} and the vectors η and δ^{τ₂}.

We imagine again that player 1 makes his choice of ξ in ignorance of player 2's choice of η, and vice versa.

The meaning is, of course, that when these choices have been made, player 1 will actually use (every) τ₁ = 1, ..., β₁ with the probability ξ_{τ₁}, and player 2 will use (every) τ₂ = 1, ..., β₂ with the probability η_{τ₂}. Since their choices are independent, the mathematical expectation of the outcome is

(17:2)  K(ξ, η) = Σ_{τ₁=1}^{β₁} Σ_{τ₂=1}^{β₂} H(τ₁, τ₂) ξ_{τ₁} η_{τ₂}.
In other words, we have replaced the original game Γ by a new one of essentially the same structure, but with the following formal differences: The numbers τ₁, τ₂ (the choices of the players) are replaced by the vectors ξ, η. The function H(τ₁, τ₂) (the outcome, or rather the "mathematical expectation" of the outcome of a play) is replaced by K(ξ, η). All these considerations demonstrate the identity of structure of our present view of Γ with that of 14.1.2., the sole difference being the replacement of τ₁, τ₂, H(τ₁, τ₂) by ξ, η, K(ξ, η), described above. This isomorphism suggests the application of the same devices which we used on the original Γ: the comparison with the majorant and minorant games Γ₁ and Γ₂, as described in 14.2., 14.3.1., 14.3.3.
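The bilinear form (17:2) translates directly into code. A sketch of our own (the function `K` mirrors the formula; the sample matrix is Matching Pennies):

```python
def K(H, xi, eta):
    """The bilinear form (17:2): the mathematical expectation of the
    outcome when the players use the mixed strategies xi and eta."""
    return sum(H[t1][t2] * xi[t1] * eta[t2]
               for t1 in range(len(xi)) for t2 in range(len(eta)))

H = [[1, -1], [-1, 1]]                 # Matching Pennies
print(K(H, [1, 0], [0, 1]))            # at pure strategies K reduces to H: -1
print(K(H, [0.5, 0.5], [0.25, 0.75]))  # in between it interpolates linearly
```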
17.4.2. Thus in Γ₁ player 1 chooses his ξ first and player 2 chooses his η afterwards, in full knowledge of the ξ chosen by his opponent. In Γ₂ the order of their choices is reversed. So the discussion of 14.3.1. applies literally. Player 1, choosing a certain ξ, may expect that player 2 will choose his η so as to minimize K(ξ, η); i.e. player 1's choice of ξ leads to the value Min_η K(ξ, η). This is a function of ξ alone; hence player 1 should choose his ξ so as to maximize Min_η K(ξ, η). Thus the value of a play of Γ₁ is (for player 1)

v₁' = Max_ξ Min_η K(ξ, η).

Similarly the value of a play of Γ₂ (for player 1) turns out to be

v₂' = Min_η Max_ξ K(ξ, η).

(The apparent assumption of rational behavior of the opponent does not really matter, since the justifications (14:A:a)-(14:A:e), (14:B:a)-(14:B:e) of 14.3.1. and 14.3.3. again apply literally.)
As in 14.4.1. we can argue that the obvious fact that Γ₁ is less favorable for player 1 than Γ₂ constitutes a proof of

v₁' ≦ v₂',

and that if this is questioned, a rigorous proof is contained in (13:A*) in 13.4.3. The x, y, φ there correspond to our ξ, η, K.¹ If it should happen that

v₁' = v₂',

then the considerations of 14.5. apply literally. The arguments (14:C:a)-(14:C:f), (14:D:a), (14:D:b) loc. cit. determine the concept of a "good" ξ and η, and fix the "value" of a play of Γ (for the player 1) at

v' = v₁' = v₂'.²

All this happens by (13:B*) in 13.4.3. if and only if a saddle point of K exists. (The x, y, φ there correspond to our ξ, η, K.)
17.5. General Strict Determinateness
17.5.1. We have replaced the v₁, v₂ of (14:A:c) and (14:B:c) by our present v₁', v₂', and the above discussion shows that the latter can perform the functions of the former. But we are just as much dependent upon v₁' = v₂' as we were then upon v₁ = v₂. It is natural to ask, therefore, whether there is any gain in this substitution.

Evidently this is the case if, as and when, there is a better prospect of having v₁' = v₂' (for any given Γ) than of having v₁ = v₂. We called Γ strictly determined when v₁ = v₂; it now seems preferable to make a distinction, and to designate Γ for v₁ = v₂ as specially strictly determined, and for v₁' = v₂' as generally strictly determined. This nomenclature is justified only provided we can show that the former implies the latter.

¹ Although ξ, η are vectors, i.e. sequences of real numbers ξ₁, ..., ξ_{β₁} and η₁, ..., η_{β₂}, it is perfectly admissible to view each as a single variable in the maxima and minima which we are now forming. Their domains are, of course, the sets S_{β₁}, S_{β₂} which we introduced in 17.2.
² For an exhaustive repetition of the arguments in question cf. 17.8.
This implication is plausible by common sense: Our introduction of mixed strategies has increased the player's ability to defend himself against having his strategy found out; so it may be expected that v₁', v₂' actually lie between v₁, v₂. For this reason one may even assert that

(17:3)  v₁ ≦ v₁' ≦ v₂' ≦ v₂.

(This inequality secures, of course, the implication just mentioned.)

To exclude all possibility of doubt we shall give a rigorous proof of (17:3). It is convenient to prove this as a corollary of another lemma.
17.5.2. First we prove this lemma:

(17:A)  For every ξ in S_{β₁}:

Min_η K(ξ, η) = Min_η Σ_{τ₁=1}^{β₁} Σ_{τ₂=1}^{β₂} H(τ₁, τ₂) ξ_{τ₁} η_{τ₂} = Min_{τ₂} Σ_{τ₁=1}^{β₁} H(τ₁, τ₂) ξ_{τ₁}.

For every η in S_{β₂}:

Max_ξ K(ξ, η) = Max_ξ Σ_{τ₁=1}^{β₁} Σ_{τ₂=1}^{β₂} H(τ₁, τ₂) ξ_{τ₁} η_{τ₂} = Max_{τ₁} Σ_{τ₂=1}^{β₂} H(τ₁, τ₂) η_{τ₂}.
Proof: We prove the first formula only; the proof of the second is exactly the same, only interchanging Max and Min as well as ≧ and ≦.

Consideration of the special vector η = δ^{τ₂'} (cf. 16.1.3. and the end of 17.2.) gives

Min_η Σ_{τ₁=1}^{β₁} Σ_{τ₂=1}^{β₂} H(τ₁, τ₂) ξ_{τ₁} η_{τ₂} ≦ Σ_{τ₁=1}^{β₁} Σ_{τ₂=1}^{β₂} H(τ₁, τ₂) ξ_{τ₁} δ^{τ₂'}_{τ₂} = Σ_{τ₁=1}^{β₁} H(τ₁, τ₂') ξ_{τ₁}.

Since this is true for all τ₂', so

(17:4:a)  Min_η Σ_{τ₁=1}^{β₁} Σ_{τ₂=1}^{β₂} H(τ₁, τ₂) ξ_{τ₁} η_{τ₂} ≦ Min_{τ₂'} Σ_{τ₁=1}^{β₁} H(τ₁, τ₂') ξ_{τ₁}.

On the other hand, for all τ₂

Σ_{τ₁=1}^{β₁} H(τ₁, τ₂) ξ_{τ₁} ≧ Min_{τ₂'} Σ_{τ₁=1}^{β₁} H(τ₁, τ₂') ξ_{τ₁}.

Given any η in S_{β₂}, multiply this by η_{τ₂} and sum over τ₂ = 1, ..., β₂. Since Σ_{τ₂=1}^{β₂} η_{τ₂} = 1, therefore

Σ_{τ₁=1}^{β₁} Σ_{τ₂=1}^{β₂} H(τ₁, τ₂) ξ_{τ₁} η_{τ₂} ≧ Min_{τ₂'} Σ_{τ₁=1}^{β₁} H(τ₁, τ₂') ξ_{τ₁}

results. Since this is true for all η, so

(17:4:b)  Min_η Σ_{τ₁=1}^{β₁} Σ_{τ₂=1}^{β₂} H(τ₁, τ₂) ξ_{τ₁} η_{τ₂} ≧ Min_{τ₂'} Σ_{τ₁=1}^{β₁} H(τ₁, τ₂') ξ_{τ₁}.

(17:4:a), (17:4:b) yield together the desired relation.
If we combine the above formulae with the definition of v₁', v₂' in 17.4., then we obtain

(17:5:a)  v₁' = Max_ξ Min_{τ₂} Σ_{τ₁=1}^{β₁} H(τ₁, τ₂) ξ_{τ₁},

(17:5:b)  v₂' = Min_η Max_{τ₁} Σ_{τ₂=1}^{β₂} H(τ₁, τ₂) η_{τ₂}.

These formulae have a simple verbal interpretation: In computing v₁' we need only give player 1 the protection against having his strategy found out which lies in the use of ξ (instead of τ₁); player 2 might as well proceed in the old way and use τ₂ (and not η). In computing v₂' the roles are interchanged. This is plausible by common sense: v₁' belongs to the game Γ₁ (cf. 17.4. and 14.2.); there player 2 chooses after player 1, and is fully informed about the choice of player 1, hence he needs no protection against having his strategy found out by player 1. For v₂', which belongs to the game Γ₂ (cf. id.), the roles are interchanged.
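The content of (17:5:a), namely that the inner minimization may range over player 2's pure strategies only, can be spot-checked numerically: K(ξ, η) is an average of the column values Σ_{τ₁} H(τ₁, τ₂) ξ_{τ₁}, so no mixture η can push it below the smallest of them. A sketch with an arbitrary matrix of our own choosing (random sampling stands in for the exact minimization):

```python
import random

H = [[3, 1, 4],
     [1, 5, 9]]          # an arbitrary 2 x 3 payoff matrix (ours)

def K(xi, eta):
    return sum(H[i][j] * xi[i] * eta[j]
               for i in range(len(xi)) for j in range(len(eta)))

def column_min(xi):
    # Min over pure tau_2 of the column value, as in (17:5:a).
    return min(sum(H[i][j] * xi[i] for i in range(len(xi)))
               for j in range(len(H[0])))

random.seed(0)
xi = [0.4, 0.6]
for _ in range(1000):
    cuts = sorted([0.0, random.random(), random.random(), 1.0])
    eta = [cuts[k + 1] - cuts[k] for k in range(3)]  # a random point of S_3
    assert K(xi, eta) >= column_min(xi) - 1e-12
print(column_min(xi))    # the pure minimum, here 1.8 (up to rounding)
```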
Now the value of v₁' becomes ≦ if we restrict the variability of ξ in the Max of the above formula. Let us restrict it to the vectors ξ = δ^{τ₁'} (τ₁' = 1, ..., β₁; cf. 16.1.3. and the end of 17.2.). Since

Σ_{τ₁=1}^{β₁} H(τ₁, τ₂) δ^{τ₁'}_{τ₁} = H(τ₁', τ₂),

this replaces our expression by

Max_{τ₁'} Min_{τ₂} H(τ₁', τ₂) = v₁.

So we have shown that

v₁ ≦ v₁'.

Similarly (cf. the remark at the beginning of the proof of our lemma above) restriction of η to the η = δ^{τ₂'} establishes

v₂' ≦ v₂.

Together with v₁' ≦ v₂' (cf. 17.4.), these inequalities prove

(17:3)  v₁ ≦ v₁' ≦ v₂' ≦ v₂,

as desired.
17.6. Proof of the Main Theorem
17.6. We have established that general strict determinateness (v₁' = v₂') holds in all cases of special strict determinateness (v₁ = v₂), as is to be expected. That it holds in some further cases as well, i.e. that we can have v₁' = v₂' but not v₁ = v₂, is clear from our discussions of Matching Pennies and Stone, Paper, Scissors.¹ Thus we may say, in the sense of 17.5.1., that the passage from special to general strict determinateness does constitute an advance. But for all we know at this moment this advance may not cover the entire ground which should be controlled; it could happen that certain games Γ are not even generally strictly determined, i.e. we have not yet excluded the possibility

v₁' < v₂'.

If this possibility should occur, then all that was said in 14.7.1. would apply again, and to an increased extent: finding out one's opponent's strategy would constitute a definite advantage

Δ' = v₂' − v₁' > 0,

and it would be difficult to see how a theory of the game should be constructed without some additional hypotheses as to "who finds out whose strategy."

The decisive fact is, therefore, that it can be shown that this never happens. For all games Γ

v₁' = v₂',

i.e.

(17:6)  Max_ξ Min_η K(ξ, η) = Min_η Max_ξ K(ξ, η),

or equivalently (again use (13:B*) in 13.4.3., the x, y, φ there corresponding to our ξ, η, K): A saddle point of K(ξ, η) exists.

This is a general theorem valid for all functions K(ξ, η) of the form

(17:2)  K(ξ, η) = Σ_{τ₁=1}^{β₁} Σ_{τ₂=1}^{β₂} H(τ₁, τ₂) ξ_{τ₁} η_{τ₂}.

The coefficients H(τ₁, τ₂) are absolutely unrestricted; they form, as described in 14.1.3., a perfectly arbitrary matrix. The variables ξ, η are really

¹ In both games v₁ = −1, v₂ = 1 (cf. 14.7.2., 14.7.3.), while the discussion of 17.1. can be interpreted as establishing v₁' = v₂' = 0.
sequences of real numbers ξ₁, ..., ξ_{β₁} and η₁, ..., η_{β₂}, their domains being the sets S_{β₁}, S_{β₂} (cf. footnote 1 on p. 150). The functions K(ξ, η) of the form (17:2) are called bilinear forms.

With the help of the results of 16.4.3. the proof is easy.¹ This is it:
We apply (16:19:a), (16:19:b) in 16.4.3., replacing the i, j, n, m, a(i, j) there by our τ₁, τ₂, β₁, β₂, H(τ₁, τ₂), and the vectors w, x there by our ξ, η.

If (16:19:b) holds, then we have a ξ in S_{β₁} with

Σ_{τ₁=1}^{β₁} H(τ₁, τ₂) ξ_{τ₁} ≧ 0    for τ₂ = 1, ..., β₂,

i.e. with

Min_{τ₂} Σ_{τ₁=1}^{β₁} H(τ₁, τ₂) ξ_{τ₁} ≧ 0.

Therefore the formula (17:5:a) of 17.5.2. gives

v₁' ≧ 0.

If (16:19:a) holds, then we have an η in S_{β₂} with

Σ_{τ₂=1}^{β₂} H(τ₁, τ₂) η_{τ₂} ≦ 0    for τ₁ = 1, ..., β₁,
¹ This theorem occurred and was proved first in the original publication of one of the authors on the theory of games: J. von Neumann: "Zur Theorie der Gesellschaftsspiele," Math. Annalen, Vol. 100 (1928), pp. 295-320.
A slightly more general form of this Min-Max problem arises in another question of mathematical economics in connection with the equations of production: J. von Neumann: "Über ein ökonomisches Gleichungssystem und eine Verallgemeinerung des Brouwerschen Fixpunktsatzes," Ergebnisse eines Math. Kolloquiums, Vol. 8 (1937), pp. 73-83.
It seems worth remarking that two widely different problems related to mathematical economics, although discussed by entirely different methods, lead to the same mathematical problem, and at that to one of a rather uncommon type: the "Min-Max type." There may be some deeper formal connections here, as well as in some other directions, mentioned in the second paper. The subject should be clarified further.
The proof of our theorem, given in the first paper, made a rather involved use of some topology and of functional calculus. The second paper contained a different proof, which was fully topological and connected the theorem with an important device of that discipline: the so-called "Fixed Point Theorem" of L. E. J. Brouwer. This aspect was further clarified and the proof simplified by S. Kakutani: "A Generalization of Brouwer's Fixed Point Theorem," Duke Math. Journal, Vol. 8 (1941), pp. 457-459.
All these proofs are definitely non-elementary. The first elementary one was given by J. Ville in the collection by E. Borel and collaborators, "Traité du Calcul des Probabilités et de ses Applications," Vol. IV, 2: "Applications aux Jeux de Hasard," Paris (1938), Note by J. Ville: "Sur la Théorie Générale des Jeux où intervient l'Habileté des Joueurs," pp. 105-113.
The proof which we are going to give carries the elementarization initiated by J. Ville further, and seems to be particularly simple. The key to the procedure is, of course, the connection with the theory of convexity in 16., and particularly with the results of 16.4.3.
i.e. with

Max_{τ₁} Σ_{τ₂=1}^{β₂} H(τ₁, τ₂) η_{τ₂} ≦ 0.

Therefore the formula (17:5:b) of 17.5.2. gives

v₂' ≦ 0.

So we see: Either v₁' ≧ 0 or v₂' ≦ 0, i.e.

(17:7)  Never v₁' < 0 < v₂'.
Now choose an arbitrary number w and replace the function H(τ₁, τ₂) by H(τ₁, τ₂) − w.¹

This replaces K(ξ, η) by K(ξ, η) − w Σ_{τ₁=1}^{β₁} Σ_{τ₂=1}^{β₂} ξ_{τ₁} η_{τ₂}, that is, as ξ, η lie in S_{β₁}, S_{β₂} and so Σ_{τ₁=1}^{β₁} ξ_{τ₁} = Σ_{τ₂=1}^{β₂} η_{τ₂} = 1, by K(ξ, η) − w. Consequently v₁', v₂' are replaced by v₁' − w, v₂' − w.² Therefore application of (17:7) to these v₁' − w, v₂' − w gives

(17:8)  Never v₁' < w < v₂'.

Now w was perfectly arbitrary. Hence for v₁' < v₂' it would be possible to choose w with v₁' < w < v₂', thus contradicting (17:8). So v₁' < v₂' is impossible, and we have proved that v₁' = v₂', as desired. This completes the proof.
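For a concrete game the theorem can be exhibited by brute force. The sketch below (grid search is our illustrative device, not the text's method of proof) approximates v₁' and v₂' for Matching Pennies over a grid of mixtures, using lemma (17:A) to keep the inner optimization over pure strategies; it also checks the w-shift used above, namely that replacing H by H − w lowers K by exactly w.

```python
H = [[1, -1], [-1, 1]]   # Matching Pennies

def K(H, xi, eta):
    return sum(H[i][j] * xi[i] * eta[j] for i in range(2) for j in range(2))

mixes = [[k / 100, 1 - k / 100] for k in range(101)]  # grid of mixtures (p, 1-p)

# By (17:A) the inner optimum is attained at a pure strategy, so only the
# outer player needs to range over mixtures.
v1 = max(min(K(H, xi, e) for e in ([1, 0], [0, 1])) for xi in mixes)
v2 = min(max(K(H, x, eta) for x in ([1, 0], [0, 1])) for eta in mixes)
print(v1, v2)            # 0.0 0.0: the two values agree, v1' = v2'

# The w-shift of the proof: H - w changes K(xi, eta) by exactly -w.
w = 5
H_shifted = [[h - w for h in row] for row in H]
xi, eta = [0.3, 0.7], [0.6, 0.4]
print(K(H_shifted, xi, eta) - K(H, xi, eta))  # -w (here -5, up to rounding)
```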
17.7. Comparison of the Treatments by Pure and by Mixed Strategies
17.7.1. Before going further let us once more consider the meaning of the result of 17.6.

The essential feature of this is that we have always v₁' = v₂' but not always v₁ = v₂, i.e. always general strict determinateness, but not always special strict determinateness (cf. the beginning of 17.6.).

Or, to express it mathematically: We have always

(17:9)  Max_ξ Min_η K(ξ, η) = Min_η Max_ξ K(ξ, η),
¹ I.e. the game Γ is replaced by a new one which is played in precisely the same way as Γ, except that at the end player 1 gets less (and player 2 gets more) by the fixed amount w than in Γ.
² This is immediately clear if we remember the interpretation of the preceding footnote.
i.e.

(17:10)  Max_ξ Min_η Σ_{τ₁=1}^{β₁} Σ_{τ₂=1}^{β₂} H(τ₁, τ₂) ξ_{τ₁} η_{τ₂} = Min_η Max_ξ Σ_{τ₁=1}^{β₁} Σ_{τ₂=1}^{β₂} H(τ₁, τ₂) ξ_{τ₁} η_{τ₂}.

Using (17:A) we may even write for this

(17:11)  Max_ξ Min_{τ₂} Σ_{τ₁=1}^{β₁} H(τ₁, τ₂) ξ_{τ₁} = Min_η Max_{τ₁} Σ_{τ₂=1}^{β₂} H(τ₁, τ₂) η_{τ₂}.
But we do not always have

(17:12)  Max_{τ₁} Min_{τ₂} H(τ₁, τ₂) = Min_{τ₂} Max_{τ₁} H(τ₁, τ₂).

Let us compare (17:9) and (17:12): (17:9) is always true and (17:12) is not. Yet the difference between these is merely that of ξ, η, K and τ₁, τ₂, H. Why does the substitution of the former for the latter convert the untrue assertion (17:12) into the true assertion (17:9)?

The reason is that the H(τ₁, τ₂) of (17:12) is a perfectly arbitrary function of its variables τ₁, τ₂ (cf. 14.1.3.), while the K(ξ, η) of (17:9) is an extremely special function of its variables ξ, η, i.e. of the ξ₁, ..., ξ_{β₁}, η₁, ..., η_{β₂}: namely a bilinear form. (Cf. the first part of 17.6.) Thus the absolute generality of H(τ₁, τ₂) renders any proof of (17:12) impossible, while the special bilinear form nature of K(ξ, η) provides the basis for the proof of (17:9), as given in 17.6.¹
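The contrast between (17:9) and (17:12) is visible already in Matching Pennies, as a small check of our own shows: over pure strategies the Max-Min and Min-Max of H differ, while over mixed strategies they agree.

```python
H = [[1, -1], [-1, 1]]   # Matching Pennies

# (17:12) over pure strategies fails here:
maxmin_pure = max(min(row) for row in H)
minmax_pure = min(max(H[i][j] for i in range(2)) for j in range(2))
print(maxmin_pure, minmax_pure)   # -1 1: not equal

# (17:9) over mixed strategies holds; by (17:A) it suffices to let the
# outer player range over a grid of mixtures and the inner one stay pure.
mixes = [[k / 100, 1 - k / 100] for k in range(101)]
v1p = max(min(sum(H[i][j] * xi[i] for i in range(2)) for j in range(2))
          for xi in mixes)
v2p = min(max(sum(H[i][j] * eta[j] for j in range(2)) for i in range(2))
          for eta in mixes)
print(v1p, v2p)                   # 0.0 0.0: equal
```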
17.7.2. While this is plausible, it may seem paradoxical that K(ξ, η) should be more special than H(τ₁, τ₂), although the former was obtained from the latter by a process which bore all the marks of a generalization: We obtained it by the replacement of our original strict concept of a pure strategy by the mixed strategies, as described in 17.2.; i.e. by the replacement of τ₁, τ₂ by ξ, η.

But a closer inspection dispels this paradox. K(ξ, η) is a very special function when compared with H(τ₁, τ₂); but its variables have an enormously

¹ That K(ξ, η) is a bilinear form is due to our use of the "mathematical expectation" wherever probabilities intervene. It seems significant that the linearity of this concept is connected with the existence of a solution, in the sense in which we found one. Mathematically this opens up a rather interesting perspective: One might investigate which other concepts, in place of "mathematical expectation," would not interfere with our solution, i.e. with the result of 17.6. for zero-sum two-person games.
The concept of "mathematical expectation" is clearly a fundamental one in many ways. Its significance from the point of view of the theory of utility was brought forth particularly in 3.7.1.
wider domain than the previous variables τ₁, τ₂. Indeed, τ₁ had the finite set (1, ..., β₁) for its domain, while ξ varies over the set S_{β₁}, which is a (β₁ − 1)-dimensional surface in the β₁-dimensional linear space (cf. the end of 16.2.2. and 17.2.). Similarly for τ₂ and η.¹
There are actually among the in S fti special points which correspond
to the various TI in (1, , 0i). Given such a r\ we can form (as in
16.1.3. and at the end of 17.2.) the coordinate vector = 6 T i, expressing
the choice of the strategy S^ to the exclusion of all others. We can corre
late special rj in S^ with the r 2 in (1, , fit) in the same way: Given such
>
a T 2 we can form the coordinate vector rj = 6 r , expressing the choice of the
strategy 2 2 to the exclusion of all others.
Now clearly:

K(δ^{τ₁}, δ^{τ₂}) = ℋ(τ₁, τ₂).²

Thus the function K(ξ, η) contains, in spite of its special character, the entire function ℋ(τ₁, τ₂), and it is therefore really the more general concept of the two, as it ought to be. It is actually more than ℋ(τ₁, τ₂), since not all ξ, η are of the special form δ^{τ₁}, δ^{τ₂}, i.e. not all mixed strategies are pure.³ One could say that K(ξ, η) is the extension of ℋ(τ₁, τ₂) from the narrower domain of the τ₁, τ₂ (i.e. of the δ^{τ₁}, δ^{τ₂}) to the wider domain of the ξ, η (i.e. to all of S_{β₁}, S_{β₂}), from the pure strategies to the mixed strategies. The fact that K(ξ, η) is a bilinear form expresses merely that this extension is carried out by linear interpolation. That it is this process which must be used, is of course due to the linear character of "mathematical expectation."⁴
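In present-day notation this extension by linear interpolation can be sketched numerically. The following fragment is illustrative only and not part of the original text; it uses the Matching Pennies matrix (cf. 14.7.) and checks that the bilinear form K, evaluated at coordinate vectors, reproduces the matrix ℋ.

```python
# Illustrative sketch: the bilinear form K extends the payoff matrix H
# from pure to mixed strategies; at coordinate vectors it reproduces H.
H = [[1.0, -1.0],
     [-1.0, 1.0]]   # Matching Pennies payoffs for player 1

def K(xi, eta):
    # Expected payoff ("mathematical expectation") under mixed strategies.
    return sum(H[t1][t2] * xi[t1] * eta[t2]
               for t1 in range(2) for t2 in range(2))

def delta(i):
    # Coordinate vector: the pure strategy i used with probability 1.
    return [1.0 if j == i else 0.0 for j in range(2)]

# K(delta^{tau_1}, delta^{tau_2}) = H(tau_1, tau_2):
for t1 in range(2):
    for t2 in range(2):
        assert K(delta(t1), delta(t2)) == H[t1][t2]
```

Evaluating K away from the coordinate vectors, e.g. K((1/2, 1/2), (1/2, 1/2)) = 0, gives the values which the linear interpolation supplies beyond the pure strategies.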
¹ Observe that ξ = {ξ₁, ⋯, ξ_{β₁}}, with the components ξ_{τ₁}, τ₁ = 1, ⋯, β₁, also contains τ₁; but there is a fundamental difference. In ℋ(τ₁, τ₂), τ₁ itself is a variable. In K(ξ, η), ξ is a variable, while τ₁ is, so to say, a variable within the variable. ξ_{τ₁} is actually a function of τ₁ (cf. the end of 16.1.2.) and this function as such is the variable of K(ξ, η). Similarly for τ₂ and η.
Or, in terms of τ₁, τ₂: ℋ(τ₁, τ₂) is a function of τ₁, τ₂, while K(ξ, η) is a function of functions of τ₁, τ₂ (in the mathematical terminology: a functional).
² The meaning of this formula is apparent if we consider what choice of strategies the δ^{τ₁}, δ^{τ₂} represent.
³ I.e. several strategies may be used effectively with positive probabilities.
⁴ The fundamental connection between the concept of numerical utility and the linear "mathematical expectation" was pointed out at the end of 3.7.1.
158 ZEROSUM TWOPERSON GAMES: THEORY
17.7.3. Reverting to (17:9)-(17:12), we see now that we can express the truth of (17:9)-(17:11) and the untruth of (17:12) as follows:
(17:9), (17:10) express that each player is fully protected against having his strategy found out by his opponent if he can use the mixed strategies ξ, η instead of the pure strategies τ₁, τ₂. (17:11) states that this remains true if the player who finds out his opponent's strategy uses the τ₁, τ₂, while only the player whose strategy is being found out enjoys the protection of the ξ, η. The falsity of (17:12), finally, shows that both players, and particularly the player whose strategy happens to be found out, may not forego with impunity the protection of the ξ, η.
17.8. Analysis of General Strict Determinateness
17.8.1. We shall now reformulate the contents of 14.5., as mentioned at the end of 17.4., with particular consideration of the fact established in 17.6. that every zero-sum two-person game Γ is generally strictly determined. Owing to this result we may define:

v′ = Max_ξ Min_η K(ξ, η) = Min_η Max_ξ K(ξ, η) = Sa_{ξ/η} K(ξ, η).
(Cf. also (13:C*) in 13.5.2. and the end of 13.4.3.)
Let us form two sets A, B, subsets of S_{β₁}, S_{β₂} respectively, in analogy to the definition of the sets A, B in (14:D:a), (14:D:b) of 14.5.1. These are the sets A⁺, B⁺ of 13.5.1. (the φ there corresponding to our K). We define:

(17:B:a)   A is the set of those ξ (in S_{β₁}) for which Min_η K(ξ, η) assumes its maximum value, i.e. for which

Min_η K(ξ, η) = Max_ξ Min_η K(ξ, η) = v′.

(17:B:b)   B is the set of those η (in S_{β₂}) for which Max_ξ K(ξ, η) assumes its minimum value, i.e. for which

Max_ξ K(ξ, η) = Min_η Max_ξ K(ξ, η) = v′.
It is now possible to repeat the argumentation of 14.5.
In doing this we shall use the homologous enumeration for the assertions (14:C:a)-(14:C:f) as in 14.5.¹
¹ (a)-(f) will therefore appear in an order different from the natural one. This was equally true in 14.5., since the enumeration there was based upon that of 14.3.1., 14.3.3., and the argumentation in those paragraphs followed a somewhat different route.
We observe first:
(17:C:d)   Player 1 can, by playing appropriately, secure for himself a gain ≧ v′, irrespective of what player 2 does.
           Player 2 can, by playing appropriately, secure for himself a gain ≧ −v′, irrespective of what player 1 does.

Proof: Let player 1 choose ξ from A; then irrespective of what player 2 does, i.e. for every η, we have K(ξ, η) ≧ Min_η K(ξ, η) = v′. Let player 2 choose η from B. Then irrespective of what player 1 does, i.e. for every ξ, we have K(ξ, η) ≦ Max_ξ K(ξ, η) = v′. This completes the proof.
Second, (17:C:d) is clearly equivalent to this:

(17:C:e)   Player 2 can, by playing appropriately, make it sure that the gain of player 1 is ≦ v′, i.e. prevent him from gaining > v′, irrespective of what player 1 does.
           Player 1 can, by playing appropriately, make it sure that the gain of player 2 is ≦ −v′, i.e. prevent him from gaining > −v′, irrespective of what player 2 does.
17.8.2. Third, we may now assert on the basis of (17:C:d) and (17:C:e), and of the considerations in the proof of (17:C:d), that:

(17:C:a)   The good way (combination of strategies) for 1 to play the game Γ is to choose any ξ belonging to A, A being the set of (17:B:a) above.

(17:C:b)   The good way (combination of strategies) for 2 to play the game Γ is to choose any η belonging to B, B being the set of (17:B:b) above.

Fourth, combination of the assertions of (17:C:d), or equally well of those of (17:C:e), gives:

(17:C:c)   If both players 1 and 2 play the game Γ well, i.e. if ξ belongs to A and η belongs to B, then the value of K(ξ, η) will be equal to the value of a play (for 1), i.e. to v′.

We add the observation that (13:D*) in 13.5.2. and the remark concerning the sets A, B before (17:B:a), (17:B:b) above give together this:

(17:C:f)   Both players 1 and 2 play the game Γ well, i.e. ξ belongs to A and η belongs to B, if and only if ξ, η is a saddle point of K(ξ, η).
All this should make it amply clear that v′ may indeed be interpreted as the value of a play of Γ (for 1), and that A, B contain the good ways of playing Γ for 1, 2, respectively. There is nothing heuristic or uncertain about the entire argumentation (17:C:a)-(17:C:f). We have made no extra hypotheses about the "intelligence" of the players, about "who has found out whose strategy" etc. Nor are our results for one player based upon any belief in the rational conduct of the other, a point the importance of which we have repeatedly stressed. (Cf. the end of 4.1.2.; also 15.8.3.)
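The definition of v′ can be sketched numerically for a 2 × 2 game. The fragment below is illustrative only (Matching Pennies again); it exploits the fact that the inner Min and Max over mixed strategies are attained at pure strategies (the expectation being linear), so that v′ can be approximated by a one-dimensional search over ξ₁ and η₁.

```python
# Illustrative sketch: Max Min = Min Max over mixed strategies for a
# 2x2 game, found by a grid search over the mixing probability.
H = [[1.0, -1.0],
     [-1.0, 1.0]]   # Matching Pennies payoffs for player 1

def min_payoff(p):
    # Min over eta of K((p, 1-p), eta): attained at a pure tau_2.
    return min(p * H[0][t2] + (1 - p) * H[1][t2] for t2 in range(2))

def max_payoff(q):
    # Max over xi of K(xi, (q, 1-q)): attained at a pure tau_1.
    return max(q * H[t1][0] + (1 - q) * H[t1][1] for t1 in range(2))

grid = [i / 1000 for i in range(1001)]
v_lower = max(min_payoff(p) for p in grid)   # Max_xi Min_eta K
v_upper = min(max_payoff(q) for q in grid)   # Min_eta Max_xi K
assert v_lower == 0.0 and v_upper == 0.0     # both agree: v' = 0
```

Both one-sided values coincide at p = q = 1/2, in accordance with the general strict determinateness established in 17.6.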
17.9. Further Characteristics of Good Strategies
17.9.1. The last results, (17:C:c) and (17:C:f) in 17.8.2., give also a simple explicit characterization of the elements of our present solution, i.e. of the number v′ and of the vector sets A and B.
By (17:C:c) loc. cit., A, B determine v′; hence we need only study A, B, and we shall do this by means of (17:C:f) id.
According to that criterion, ξ belongs to A and η belongs to B if and only if ξ, η is a saddle point of K(ξ, η). This means that

Max_{ξ′} K(ξ′, η) = K(ξ, η) = Min_{η′} K(ξ, η′).

We make this explicit by using the expression (17:2) of 17.4.1. and 17.6. for K(ξ, η), and the expressions of the lemma (17:A) of 17.5.2. for Max_{ξ′} K(ξ′, η) and Min_{η′} K(ξ, η′). Then our equations become:

Max_{τ₁} Σ_{τ₂=1}^{β₂} ℋ(τ₁, τ₂) η_{τ₂} = Σ_{τ₁=1}^{β₁} Σ_{τ₂=1}^{β₂} ℋ(τ₁, τ₂) ξ_{τ₁} η_{τ₂} = Min_{τ₂} Σ_{τ₁=1}^{β₁} ℋ(τ₁, τ₂) ξ_{τ₁}.

Considering that Σ_{τ₁=1}^{β₁} ξ_{τ₁} = 1 and Σ_{τ₂=1}^{β₂} η_{τ₂} = 1, we can also write for these equations:

Σ_{τ₁=1}^{β₁} {Max_{τ₁′} Σ_{τ₂=1}^{β₂} ℋ(τ₁′, τ₂) η_{τ₂} − Σ_{τ₂=1}^{β₂} ℋ(τ₁, τ₂) η_{τ₂}} ξ_{τ₁} = 0,

Σ_{τ₂=1}^{β₂} {Σ_{τ₁=1}^{β₁} ℋ(τ₁, τ₂) ξ_{τ₁} − Min_{τ₂′} Σ_{τ₁=1}^{β₁} ℋ(τ₁, τ₂′) ξ_{τ₁}} η_{τ₂} = 0.
Now on the left-hand side of these equations the ξ_{τ₁}, η_{τ₂} have coefficients which are all ≧ 0.¹ The ξ_{τ₁}, η_{τ₂} themselves are also ≧ 0. Hence these
1 Observe how the Max and Min occur there!
equations hold only when all terms of their left-hand sides vanish separately. I.e. when for each τ₁ = 1, ⋯, β₁ for which the coefficient is not zero, we have ξ_{τ₁} = 0; and for each τ₂ = 1, ⋯, β₂ for which the coefficient is not zero, we have η_{τ₂} = 0.
Summing up:

(17:D)   ξ belongs to A and η belongs to B if and only if these are true:
         For each τ₁ = 1, ⋯, β₁ for which Σ_{τ₂=1}^{β₂} ℋ(τ₁, τ₂) η_{τ₂} does not assume its maximum (in τ₁) we have ξ_{τ₁} = 0.
         For each τ₂ = 1, ⋯, β₂ for which Σ_{τ₁=1}^{β₁} ℋ(τ₁, τ₂) ξ_{τ₁} does not assume its minimum (in τ₂) we have η_{τ₂} = 0.

It is easy to formulate these principles verbally. They express this: If ξ, η are good mixed strategies, then ξ excludes all strategies τ₁ which are not optimal (for player 1) against η, and η excludes all strategies τ₂ which are not optimal (for player 2) against ξ; i.e. ξ, η are, as was to be expected, optimal against each other.
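This exclusion principle can be sketched numerically. The following fragment is illustrative only; the 3 × 2 matrix and the equilibrium strategies in it are hypothetical examples, not taken from the text.

```python
# Illustrative sketch of (17:D): a good mixed strategy excludes every
# pure strategy that is not optimal against the opponent's good strategy.
H = [[2.0, -1.0],
     [-1.0, 2.0],
     [0.0, 0.0]]          # hypothetical; row 3 is never a best reply here
xi = [0.5, 0.5, 0.0]      # good strategy for player 1
eta = [0.5, 0.5]          # good strategy for player 2

row_payoffs = [sum(H[t1][t2] * eta[t2] for t2 in range(2))
               for t1 in range(3)]
col_payoffs = [sum(H[t1][t2] * xi[t1] for t1 in range(3))
               for t2 in range(2)]

# Wherever the row payoff misses its maximum, xi must vanish:
best = max(row_payoffs)
assert all(xi[t1] == 0.0 for t1 in range(3) if row_payoffs[t1] < best)
# Wherever the column payoff misses its minimum, eta must vanish:
worst = min(col_payoffs)
assert all(eta[t2] == 0.0 for t2 in range(2) if col_payoffs[t2] > worst)
```

Here row 3 yields only 0 against η, while rows 1 and 2 yield 1/2; accordingly the good ξ puts no weight on it.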
17.9.2. Another remark which may be made at this point is this:

(17:E)   The game is specially strictly determined if and only if there exists for each player a good strategy which is a pure strategy.

In view of our past discussions, and particularly of the process of generalization by which we passed from pure strategies to mixed strategies, this assertion may be intuitively convincing. But we shall also supply a mathematical proof, which is equally simple. This is it:
We saw in the last part of 17.5.2. that both v₁ and v′₁ obtain by applying Max_ξ to Min_{τ₂} Σ_{τ₁=1}^{β₁} ℋ(τ₁, τ₂) ξ_{τ₁}, only with different domains for ξ: the set of all δ^{τ₁} (τ₁ = 1, ⋯, β₁) for v₁, and all of S_{β₁} for v′₁; i.e. the pure strategies in the first case, and the mixed ones in the second. Hence v₁ = v′₁, i.e. the two maxima are equal, if and only if the maximum of the second domain is assumed (at least once) within the first domain. This means by (17:D) above that (at least) one pure strategy must belong to A, i.e. be a good one. I.e.

(17:F:a)   v₁ = v′₁ if and only if there exists for player 1 a good strategy which is a pure strategy.
Similarly:

(17:F:b)   v₂ = v′₂ if and only if there exists for player 2 a good strategy which is a pure strategy.

Now v′₁ = v′₂ = v′, and strict determinateness means v₁ = v₂ = v′, i.e. v₁ = v′₁ and v₂ = v′₂. So (17:F:a), (17:F:b) give together (17:E).
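A minimal numerical sketch of (17:E), with an illustrative matrix (not from the text) possessing a saddle point:

```python
# Illustrative sketch of (17:E): pure good strategies exist exactly when
# the pure Max-Min equals the pure Min-Max, i.e. the matrix has a
# saddle point.
H = [[3.0, 1.0],
     [4.0, 0.0]]   # hypothetical payoffs; H[0][1] is a saddle point

v1 = max(min(row) for row in H)                              # Max Min
v2 = min(max(H[r][c] for r in range(2)) for c in range(2))   # Min Max
assert v1 == v2 == 1.0   # specially strictly determined, v = H(1, 2)
```

For Matching Pennies, by contrast, the same computation gives v₁ = −1 < 1 = v₂, so no pure good strategy exists there.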
17.10. Mistakes and Their Consequences. Permanent Optimality
17.10.1. Our past discussions have made clear what a good mixed
strategy is. Let us now say a few words about the other mixed strategies.
We want to express the distance from "goodness" for those strategies (i.e. vectors ξ, η) which are not good, and to obtain some picture of the consequences of a mistake, i.e. of the use of a strategy which is not good. However, we shall not attempt to exhaust this subject, which has many intriguing ramifications.
For any ξ in S_{β₁} and any η in S_{β₂} we form the numerical functions

(17:13:a)   α(ξ) = v′ − Min_η K(ξ, η),
(17:13:b)   β(η) = Max_ξ K(ξ, η) − v′.

By the lemma (17:A) of 17.5.2. equally

(17:13:a*)   α(ξ) = v′ − Min_{τ₂} Σ_{τ₁=1}^{β₁} ℋ(τ₁, τ₂) ξ_{τ₁},
(17:13:b*)   β(η) = Max_{τ₁} Σ_{τ₂=1}^{β₂} ℋ(τ₁, τ₂) η_{τ₂} − v′.

The definition

v′ = Max_ξ Min_η K(ξ, η) = Min_η Max_ξ K(ξ, η)

guarantees that always

α(ξ) ≧ 0,   β(η) ≧ 0.

And now (17:B:a), (17:B:b) and (17:C:a), (17:C:b) in 17.8. imply that ξ is good if and only if α(ξ) = 0, and η is good if and only if β(η) = 0.
Thus α(ξ), β(η) are convenient numerical measures for the general ξ, η, expressing their distance from goodness. The explicit verbal formulation of what α(ξ), β(η) are makes this interpretation even more plausible: The formulae (17:13:a), (17:13:b) or (17:13:a*), (17:13:b*) above make clear how much of a loss the player risks relative to the value of a play for him¹ by using this particular strategy. We mean here "risk" in the sense of the worst that can happen under the given conditions.²
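These measures can be sketched numerically; the fragment below is illustrative only, computing α(ξ) by (17:13:a*) for Matching Pennies, where v′ = 0.

```python
# Illustrative sketch of (17:13:a*): alpha measures the worst-case loss,
# relative to v', of a given mixed strategy xi. Matching Pennies, v' = 0.
H = [[1.0, -1.0],
     [-1.0, 1.0]]
v = 0.0

def alpha(xi):
    # v' - Min over tau_2 of  sum_{tau_1} H(tau_1, tau_2) xi_{tau_1}
    return v - min(sum(H[t1][t2] * xi[t1] for t1 in range(2))
                   for t2 in range(2))

assert alpha([0.5, 0.5]) == 0.0   # the good strategy: distance zero
assert alpha([1.0, 0.0]) == 1.0   # always "heads": risks losing 1
```

The good strategy alone attains α = 0; every other ξ has a positive worst-case loss, in accordance with the criterion just stated.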
It must be understood, however, that α(ξ), β(η) do not disclose which strategy of the opponent will inflict this (maximum) loss upon the player who is using ξ or η. It is, in particular, not at all certain that if the opponent uses some particular good strategy, i.e. an η₀ in B or a ξ₀ in A, this in itself implies the maximum loss in question. If a (not good) ξ or η is used by the player, then the maximum loss will occur for those η′ or ξ′ of the opponent, for which

(17:14:a)   K(ξ, η′) = Min_η K(ξ, η),
(17:14:b)   K(ξ′, η) = Max_ξ K(ξ, η),

i.e. if η′ is optimal against the given ξ, or ξ′ optimal against the given η. And we have never ascertained whether any fixed η₀ or ξ₀ can be optimal against all ξ or η.
17.10.2. Let us therefore call an η′ or a ξ′ which is optimal against all ξ or η, i.e. which fulfills (17:14:a) or (17:14:b) in 17.10.1. for all ξ, η, permanently optimal. Any permanently optimal η′ or ξ′ is necessarily good; this should be clear conceptually, and an exact proof is easy.³ But
¹ I.e. we mean by loss the value of the play minus the actual outcome: v′ − K(ξ, η) for player 1 and (−v′) − (−K(ξ, η)) = K(ξ, η) − v′ for player 2.
² Indeed, using the previous footnote and (17:13:a), (17:13:b):

α(ξ) = v′ − Min_η K(ξ, η) = Max_η {v′ − K(ξ, η)},
β(η) = Max_ξ K(ξ, η) − v′ = Max_ξ {K(ξ, η) − v′}.

I.e. each is a maximum loss.
³ Proof: It suffices to show this for η′; the proof for ξ′ is analogous.
Let η′ be permanently optimal. Choose a ξ* which is optimal against η′, i.e. with

K(ξ*, η′) = Max_ξ K(ξ, η′).

By definition

K(ξ*, η′) = Min_η K(ξ*, η).

Thus ξ*, η′ is a saddle point of K(ξ, η), and therefore η′ belongs to B, i.e. it is good, by (17:C:f) in 17.8.2.
the question remains: Are all good strategies also permanently optimal?
And even: Do any permanently optimal strategies exist?
In general the answer is no. Thus in Matching Pennies or in Stone, Paper, Scissors, the only good strategy (for player 1 as well as for player 2) is ξ = η = {1/2, 1/2} or {1/3, 1/3, 1/3}, respectively.¹ If player 1 played differently, e.g. always "heads"² or always "stone,"² then he would lose if the opponent countered by playing "tails"³ or "paper."³ But then the opponent's strategy is not good, i.e. {1/2, 1/2} or {1/3, 1/3, 1/3} respectively, either. If the opponent played the good strategy, then the player's mistake would not matter.⁴
We shall get another example of this in a more subtle and complicated
way in connection with Poker and the necessity of " bluffing," in 19.2 and
19.10.3.
All this may be summed up by saying that while our good strategies
are perfect from the defensive point of view, they will (in general) not get
the maximum out of the opponent's (possible) mistakes, i.e. they are not
calculated for the offensive.
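This defensive character can be sketched on Matching Pennies; the fragment is illustrative only.

```python
# Illustrative sketch: in Matching Pennies the good strategy (1/2, 1/2)
# is purely defensive. Against a mistaken opponent it still gains only
# the value 0, whereas the exploiting (non-good) reply would gain 1.
H = [[1.0, -1.0],
     [-1.0, 1.0]]

def K(xi, eta):
    return sum(H[a][b] * xi[a] * eta[b]
               for a in range(2) for b in range(2))

mistake = [1.0, 0.0]     # player 2 always plays "heads"
good = [0.5, 0.5]        # player 1's good strategy
exploit = [1.0, 0.0]     # player 1 always matches: the offensive reply

assert K(good, mistake) == 0.0     # the good strategy does not punish
assert K(exploit, mistake) == 1.0  # the exploiting reply does
```

Of course the exploiting reply is itself not good: against {1/2, 1/2} it gains nothing, and against "tails" it loses; this is precisely the point made in the text.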
It should be remembered, however, that our deductions of 17.8. are
nevertheless cogent; i.e. a theory of the offensive, in this sense, is not
possible without essentially new ideas. The reader who is reluctant to
accept this, ought to visualize the situation in Matching Pennies or in
Stone, Paper, Scissors once more; the extreme simplicity of these two
games makes the decisive points particularly clear.
Another caveat against overemphasizing this point is: A great deal goes, in common parlance, under the name of "offensive," which is not at all "offensive" in the above sense, i.e. which is fully covered by our present theory. This holds for all games in which perfect information prevails, as will be seen in 17.10.3.⁵ It also holds for such typically "aggressive" operations (which are necessitated by imperfect information) as "bluffing" in Poker.⁶
17.10.3. We conclude by remarking that there is an important class of
(zerosum twoperson) games in which permanently optimal strategies
exist. These are the games in which perfect information prevails, which
we analyzed in 15. and particularly in 15.3.2., 15.6., 15.7. Indeed, a small
modification of the proof of special strict determinateness of these games, as
given loc. cit., would suffice to establish this assertion too. It would give
permanently optimal pure strategies. But we do not enter upon these con
siderations here.
¹ Cf. 17.1. Any other probabilities would lead to losses when "found out." Cf. below.
² This is ξ = δ¹ = {1, 0} or {1, 0, 0}, respectively.
³ This is η = δ² = {0, 1} or {0, 1, 0}, respectively.
⁴ I.e. the bad strategy of "heads" (or "stone") can be defeated only by "tails" (or "paper"), which is just as bad in itself.
⁵ Thus Chess and Backgammon are included.
⁶ The preceding discussion applies rather to the failure to "bluff." Cf. 19.2. and 19.10.3.
Since the games in which perfect information prevails are always spe
cially strictly determined (cf. above), one may suspect a more fundamental
connection between specially strictly determined games and those in which
permanently optimal strategies exist (for both players). We do not intend
to discuss these things here any further, but mention the following facts
which are relevant in this connection:
(17:G:a)   It can be shown that if permanently optimal strategies exist (for both players) then the game must be specially strictly determined.
(17:G:b)   It can be shown that the converse of (17:G:a) is not true.
(17:G:c)   Certain refinements of the concept of special strict determinateness seem to bear a closer relationship to the existence of permanently optimal strategies.
17.11. The Interchange of Players. Symmetry
17.11.1. Let us consider the role of symmetry, or more generally the effects of interchanging the players 1 and 2 in the game Γ. This will naturally be a continuation of the analysis of 14.6.
As was pointed out there, this interchange of the players replaces the function ℋ(τ₁, τ₂) by −ℋ(τ₂, τ₁). The formula (17:2) of 17.4.1. and 17.6. shows that the effect of this for K(ξ, η) is to replace it by −K(η, ξ). In the terminology of 16.4.2., we replace the matrix (of ℋ(τ₁, τ₂), cf. 14.1.3.) by its negative transposed matrix.
Thus the perfect analogy of the considerations in 14. continues; again we have the same formal results as there, provided that we replace τ₁, τ₂, ℋ(τ₁, τ₂) by ξ, η, K(ξ, η). (Cf. the previous occurrence of this in 17.4. and 17.8.)
We saw in 14.6. that this replacement of ℋ(τ₁, τ₂) by −ℋ(τ₂, τ₁) carries v₁, v₂ into −v₂, −v₁. A literal repetition of those considerations shows now that the corresponding replacement of K(ξ, η) by −K(η, ξ) carries v′₁, v′₂ into −v′₂, −v′₁. Summing up: Interchanging the players 1, 2 carries v₁, v₂, v′₁, v′₂ into −v₂, −v₁, −v′₂, −v′₁.
The result of 14.6. established for (special) strict determinateness was that v = v₁ = v₂ is carried into −v = −v₁ = −v₂. In the absence of that property no such refinement of the assertion was possible.
At present we know that we always have general strict determinateness, so that v′ = v′₁ = v′₂. Consequently this is carried into −v′ = −v′₁ = −v′₂.
Verbally the content of this result is clear: Since we succeeded in defining a satisfactory concept of the value of a play of Γ (for the player 1), v′, it is only reasonable that this quantity should change its sign when the roles of the players are interchanged.
17.11.2. We can also state rigorously when the game Γ is symmetric. This is the case when the two players 1 and 2 have precisely the same role in it, i.e. if the game Γ is identical with that game which obtains from it by interchanging the two players 1, 2. According to what was said above, this means that

ℋ(τ₁, τ₂) = −ℋ(τ₂, τ₁),

or equivalently that

K(ξ, η) = −K(η, ξ).

This property of the matrix ℋ(τ₁, τ₂) or of the bilinear form K(ξ, η) was introduced in 16.4.4. and called skew-symmetry.¹˒²
In this case v₁, v₂ must coincide with −v₂, −v₁; hence v₁ = −v₂, and since v₁ ≦ v₂, so v₁ ≦ 0. But v′ must coincide with −v′; therefore we can even assert that

v′ = 0.³

So we see: The value of each play of a symmetrical game is zero.
It should be noted that the value v′ of each play of a game Γ could be zero without Γ being symmetric. A game in which v′ = 0 will be called fair.
The examples of 14.7.2., 14.7.3. illustrate this: Stone, Paper, Scissors is symmetric (and hence fair); Matching Pennies is fair (cf. 17.1.) without being symmetric.⁴
¹ For a matrix ℋ(τ₁, τ₂) or for the corresponding bilinear form K(ξ, η), symmetry is defined by

ℋ(τ₁, τ₂) = ℋ(τ₂, τ₁),

or equivalently by

K(ξ, η) = K(η, ξ).

It is remarkable that symmetry of the game Γ is equivalent to skew-symmetry, and not to symmetry, of its matrix or bilinear form.
² Thus skew-symmetry means that a reflection of the matrix scheme of Fig. 15 in 14.1.3. on its main diagonal (consisting of the fields (1, 1), (2, 2), etc.) carries it into its own negative. (Symmetry, in the sense of the preceding footnote, would mean that it carries it into itself.)
Now the matrix scheme of Fig. 15 is rectangular; it has β₂ columns and β₁ rows. In the case under consideration its shape must be unaltered by this reflection. Hence it must be quadratic, i.e. β₁ = β₂. This is so, however, automatically, since the players 1, 2 are assumed to have the same role in Γ.
³ This is, of course, due to our knowing that v′₁ = v′₂. Without this, i.e. without the general theorem (16:F) of 16.4.3., we should assert for the v′₁, v′₂ only the same which we obtained above for the v₁, v₂: v′₁ = −v′₂, and since v′₁ ≦ v′₂, so v′₁ ≦ 0.
⁴ The players 1 and 2 have different roles in Matching Pennies: 1 tries to match, and 2 tries to avoid matching. Of course, one has a feeling that this difference is inessential and that the fairness of Matching Pennies is due to this inessentiality of the asymmetry. This could be elaborated upon, but we do not wish to do this on this occasion. A better example of fairness without symmetry would be given by a game which is grossly unsymmetric, but in which the advantages and disadvantages of each player are so judiciously adjusted that a fair game, i.e. value v′ = 0, results.
A not altogether successful attempt at such a game is the ordinary way of "Rolling Dice." In this game player 1, the "player," rolls two dice, each of which bears the numbers 1, ⋯, 6. Thus at each roll any total 2, ⋯, 12 may result. These totals
In a symmetrical game the sets A, B of (17:B:a), (17:B:b) in 17.8. are obviously identical. Since A = B we may put η = ξ in the final criterion (17:D) of 17.9. We restate it for this case:

(17:H)   In a symmetrical game, ξ belongs to A if and only if this is true: For each τ₂ = 1, ⋯, β₂ for which Σ_{τ₁=1}^{β₁} ℋ(τ₁, τ₂) ξ_{τ₁} does not assume its minimum (in τ₂) we have ξ_{τ₂} = 0.

Using the terminology of the concluding remark of 17.9., we see that the above condition expresses this: ξ is optimal against itself.
17.11.3. The results of 17.11.1., 17.11.2., that in every symmetrical game v′ = 0, can be combined with (17:C:d) in 17.8. Then we obtain this:

(17:1)   In a symmetrical game each player can, by playing appropriately, avoid loss¹ irrespective of what the opponent does.

We can state this mathematically as follows:
If the matrix ℋ(τ₁, τ₂) is skew-symmetric, then there exists a vector ξ in S_{β₁} with

Σ_{τ₁=1}^{β₁} ℋ(τ₁, τ₂) ξ_{τ₁} ≧ 0   for   τ₂ = 1, ⋯, β₂.

This could also have been obtained directly, because it coincides with the last result (16:G) in 16.4.4. To see this it suffices to introduce there our present notations: Replace the i, j, a(i, j) there by our τ₁, τ₂, ℋ(τ₁, τ₂), and the w there by our ξ.
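A minimal numerical sketch of this statement, using Stone, Paper, Scissors as the skew-symmetric matrix:

```python
# Illustrative sketch of (17:1): for a skew-symmetric matrix (Stone,
# Paper, Scissors) the strategy xi = (1/3, 1/3, 1/3) secures a gain >= 0
# against every pure strategy of the opponent.
H = [[0, 1, -1],
     [-1, 0, 1],
     [1, -1, 0]]

# Skew-symmetry: H(tau_1, tau_2) = -H(tau_2, tau_1).
assert all(H[t1][t2] == -H[t2][t1]
           for t1 in range(3) for t2 in range(3))

xi = [1/3, 1/3, 1/3]
for t2 in range(3):
    assert sum(H[t1][t2] * xi[t1] for t1 in range(3)) >= 0
```

Here the guaranteed gain is exactly 0 in every column, as it must be, since the value of a symmetrical game is zero.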
have the following probabilities:

Total             2    3    4    5    6    7    8    9   10   11   12
Chance out of 36  1    2    3    4    5    6    5    4    3    2    1
Probability      1/36 1/18 1/12 1/9  5/36 1/6  5/36 1/9  1/12 1/18 1/36

The rule is that if the "player" rolls 7 or 11 ("natural") he wins. If he rolls 2, 3, or 12 he loses. If he rolls anything else (4, 5, 6, or 8, 9, 10) then he rolls again until he rolls either a repeat of the original one (in which case he wins), or a 7 (in which case he loses). Player 2 (the "house") has no influence on the play.
In spite of the great differences of the rules as they affect players 1 and 2 (the "player" and the "house") their chances are nearly equal: A simple computation, which we do not detail, shows that the "player" has 244 chances against 251 for the "house," out of a total of 495; i.e. the value of a play played for a unit stake is

(244 − 251)/495 ≈ −1.414%.

Thus the approximation to fairness is reasonably good, and it may be questioned whether more was intended.
¹ I.e. secure himself a gain ≧ 0.
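The "simple computation, which we do not detail" in the footnote above can be carried out exactly; the following sketch recomputes the 244 : 251 division of chances under the quoted rule.

```python
# Illustrative sketch: exact win probability of the "player" in the
# "Rolling Dice" game described in the footnote.
from fractions import Fraction

ways = {t: 6 - abs(t - 7) for t in range(2, 13)}  # chances out of 36

win = Fraction(ways[7] + ways[11], 36)            # "naturals": 7 or 11
for point in (4, 5, 6, 8, 9, 10):
    # Roll the point, then repeat it before a 7 appears.
    win += Fraction(ways[point], 36) * \
           Fraction(ways[point], ways[point] + ways[7])

assert win == Fraction(244, 495)
value = win - (1 - win)                           # for a unit stake
assert abs(float(value) + 0.01414) < 1e-4         # about -1.414 %
```

The exact value is −7/495, confirming the figure quoted in the footnote.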
It is even possible to base our entire theory on this fact, i.e. to derive the theorem of 17.6. from the above result. In other words: The general strict determinateness of all Γ can be derived from that of the symmetric ones. The proof has a certain interest of its own, but we shall not discuss it here since the derivation of 17.6. is more direct.
The possibility of protecting oneself against loss (in a symmetric game) exists only due to our use of the mixed strategies ξ, η (cf. the end of 17.7.). If the players are restricted to pure strategies τ₁, τ₂, then the danger of having one's strategy found out, and consequently of sustaining losses, exists. To see this it suffices to recall what we found concerning Stone, Paper, Scissors (cf. 14.7. and 17.1.1.). We shall recognize the same fact in connection with Poker and the necessity of "bluffing" in 19.2.1.
CHAPTER IV
ZERO-SUM TWO-PERSON GAMES: EXAMPLES
18. Some Elementary Games
18.1. The Simplest Games
18.1.1. We have concluded our general discussion of the zero-sum two-person game. We shall now proceed to examine specific examples of such
games. These examples will exhibit better than any general abstract
discussions could, the true significance of the various components of our
theory. They will show, in particular, how some formal steps which are
dictated by our theory permit a direct commonsense interpretation. It
will appear that we have here a rigorous formalization of the main aspects
of such "practical" and "psychological" phenomena as those to be men
tioned in 19.2., 19.10. and 19.16. 1
18.1.2. The size of the numbers β₁, β₂, i.e. the number of alternatives confronting the two players in the normalized form of the game, gives a reasonable first estimate for the degree of complication of the game Γ. The case that either, or both, of these numbers is 1 may be disregarded: This would mean that the player in question has no choice at all by which he can influence the game.² Therefore the simplest games of the class which interests us are those with

(18:1)   β₁ = β₂ = 2.

We saw in 14.7. that Matching Pennies is such a game; its matrix scheme was given in Figure 12 in 13.4.1. Another instance of such a game is Figure 14, id.
              τ₂ = 1      τ₂ = 2
τ₁ = 1      ℋ(1, 1)    ℋ(1, 2)
τ₁ = 2      ℋ(2, 1)    ℋ(2, 2)

Figure 27.
Let us now consider the most general game falling under (18:1), i.e.
under Figure 27. This applies, e.g., to Matching Pennies if the various ways
of matching do not necessarily represent the same gain (or a gain at all),
¹ We stress this because of the widely held opinion that these things are congenitally unfit for rigorous (mathematical) treatment.
² Thus the game would really be one of one person; but then, of course, no longer of zero sum. Cf. 12.2.
nor the various ways of not matching the same loss (or a loss at all).¹ We propose to discuss for this case the results of 17.8.: the value of the game Γ and the sets of good strategies A, B. These concepts have been established by the general existential proof of 17.8. (based on the theorem of 17.6.); but we wish to obtain them again by explicit computation in this special case, and thereby gain some further insight into their functioning and their possibilities.
18.1.3. There are certain trivial adjustments which can be made on
a game given by Figure 27, and which simplify an exhaustive discussion
considerably.
First, it is quite arbitrary which of the two choices of player 1 we denote by τ₁ = 1 and by τ₁ = 2; we may interchange these, i.e. the two rows of the matrix.
Second, it is equally arbitrary which of the two choices of player 2 we denote by τ₂ = 1 and by τ₂ = 2; we may interchange these, i.e. the two columns of the matrix.
Finally, it is also arbitrary which of the two players we call 1 and which 2; we may interchange these, i.e. replace ℋ(τ₁, τ₂) by −ℋ(τ₂, τ₁) (cf. 14.6. and 17.11.). This amounts to interchanging the rows and the columns of the matrix, and changing the sign of its elements besides.
Putting everything together, we have here 2 × 2 × 2 = 8 possible adjustments, all of which describe essentially the same game.
18.2. Detailed Quantitative Discussion of These Games
18.2.1. We proceed now to the discussion proper. This will consist in the consideration of several alternative possibilities, the "Cases" to be enumerated below.
These Cases are distinguished by the various possibilities which exist for the positions of those fields of the matrix where ℋ(τ₁, τ₂) assumes its maximum and its minimum for both variables τ₁, τ₂ together. Their delimitations may at first appear to be arbitrary; but the fact that they lead to a quick cataloguing of all possibilities justifies them ex post.
Consider accordingly Max_{τ₁,τ₂} ℋ(τ₁, τ₂) and Min_{τ₁,τ₂} ℋ(τ₁, τ₂). Each one of these values will be assumed at least once, and might be assumed more than once;² but this does not concern us at this juncture. We begin now with the definition of the various Cases:
18.2.2. Case (A): It is possible to choose a field where Max_{τ₁,τ₂} is assumed and one where Min_{τ₁,τ₂} is assumed, so that the two are neither in the same row nor in the same column.
By interchanging τ₁ = 1, 2 as well as τ₂ = 1, 2 we can make the first-mentioned field (of Max_{τ₁,τ₂}) to be (1, 1). The second-mentioned field
¹ Comparison of Figs. 12 and 27 shows that in Matching Pennies ℋ(1, 1) = ℋ(2, 2) = 1 (gain on matching); ℋ(1, 2) = ℋ(2, 1) = −1 (loss on not matching).
² In Matching Pennies (cf. footnote 1 above) the Max_{τ₁,τ₂} is 1 and is assumed at (1, 1) and (2, 2), while the Min_{τ₁,τ₂} is −1 and is assumed at (1, 2) and (2, 1).
SOME ELEMENTARY GAMES 171
(of Min_{τ₁,τ₂}) must then be (2, 2). Consequently we have

(18:2)   ℋ(1, 1) ≧ ℋ(1, 2) ≧ ℋ(2, 2).

Therefore (1, 2) is a saddle point.¹
Thus the game is strictly determined in this case, and

(18:3)   v′ = v = ℋ(1, 2).
18.2.3. Case (B): It is impossible to make the choices as prescribed above:
Choose the two fields in question (of Max_{τ₁,τ₂} and Min_{τ₁,τ₂}); then they are in the same row or in the same column. If the former should be the case, then interchange the players 1, 2, so that these two fields are at any rate in the same column.²
By interchanging τ₁ = 1, 2 as well as τ₂ = 1, 2 if necessary, we can again make the first-mentioned field (of Max_{τ₁,τ₂}) to be (1, 1). So the column in question is τ₂ = 1. The second-mentioned field (of Min_{τ₁,τ₂}) must then be (2, 1).³ Consequently we have:

(18:4)   ℋ(1, 1) ≧ ℋ(1, 2),   ℋ(2, 2) ≧ ℋ(2, 1).

Actually ℋ(1, 1) = ℋ(1, 2) or ℋ(2, 2) = ℋ(2, 1) are excluded, because for the Max_{τ₁,τ₂} and Min_{τ₁,τ₂} fields they would permit the alternative choices of (1, 2), (2, 1) or (1, 1), (2, 2), thus bringing about Case (A).⁴
So we can strengthen (18:4) to

(18:5)   ℋ(1, 1) > ℋ(1, 2),   ℋ(2, 2) > ℋ(2, 1).
We must now make a further disjunction:
18.2.4. Case (B₁):

(18:6)   ℋ(1, 2) ≧ ℋ(2, 2).

Then (18:5) can be strengthened to

(18:7)   ℋ(1, 1) > ℋ(1, 2) ≧ ℋ(2, 2) > ℋ(2, 1).

Therefore (1, 2) is again a saddle point.
Thus the game is strictly determined in this case too; and again

(18:8)   v′ = v = ℋ(1, 2).
¹ Recall 13.4.2. Observe that we had to take (1, 2) and not (2, 1).
² This interchange of the two players changes the sign of every matrix element (cf. above); hence it interchanges Max_{τ₁,τ₂} and Min_{τ₁,τ₂}. But they will nevertheless be in the same column.
³ To be precise: It might also be (1, 1). But then ℋ(τ₁, τ₂) has the same Max_{τ₁,τ₂} and Min_{τ₁,τ₂}, and so it is a constant. Then we can use (2, 1) also for Min_{τ₁,τ₂}.
⁴ ℋ(1, 1) = ℋ(2, 2) and ℋ(1, 2) = ℋ(2, 1) are perfectly possible, as the example of Matching Pennies shows. Cf. footnote 1 on p. 170 and footnote 1 on p. 172.
18.2.5. Case (B₂):
(18:9)          H(1, 2) < H(2, 2).
Then (18:5) can be strengthened to
(18:10)         H(1, 1) ≧ H(2, 2) > H(1, 2) ≧ H(2, 1).¹
The game is not strictly determined.²
It is easy, however, to find good strategies, i.e. a ξ in A and an η in B, by satisfying the characteristic condition (17:D) of 17.9. We can do even more: We can choose ξ so that Σ_{τ₁=1}^{2} H(τ₁, τ₂) ξ_{τ₁} is the same for all τ₂, and η so that Σ_{τ₂=1}^{2} H(τ₁, τ₂) η_{τ₂} is the same for all τ₁. For this purpose we need:
(18:11)         H(1, 1)ξ₁ + H(2, 1)ξ₂ = H(1, 2)ξ₁ + H(2, 2)ξ₂,
                H(1, 1)η₁ + H(1, 2)η₂ = H(2, 1)η₁ + H(2, 2)η₂.
This means
(18:12)         ξ₁ : ξ₂ = H(2, 2) − H(2, 1) : H(1, 1) − H(1, 2),
                η₁ : η₂ = H(2, 2) − H(1, 2) : H(1, 1) − H(2, 1).
We must satisfy these ratios, subject to the permanent requirements
                ξ₁ ≧ 0, ξ₂ ≧ 0, ξ₁ + ξ₂ = 1,
                η₁ ≧ 0, η₂ ≧ 0, η₁ + η₂ = 1.
This is possible because the prescribed ratios (i.e. the right-hand sides in (18:12)) are positive by (18:10). We have
                ξ₁ = (H(2, 2) − H(2, 1)) / (H(1, 1) + H(2, 2) − H(1, 2) − H(2, 1)),
                ξ₂ = (H(1, 1) − H(1, 2)) / (H(1, 1) + H(2, 2) − H(1, 2) − H(2, 1)),
and further
                η₁ = (H(2, 2) − H(1, 2)) / (H(1, 1) + H(2, 2) − H(1, 2) − H(2, 1)),
                η₂ = (H(1, 1) − H(2, 1)) / (H(1, 1) + H(2, 2) − H(1, 2) − H(2, 1)).
We can even show that these ξ, η are unique, i.e. that A, B possess no other elements.
¹ This is actually the case for Matching Pennies. Cf. footnote 1 on p. 170 and footnote 4 on p. 171.
² Clearly v₁ = Max_{τ₁} Min_{τ₂} H(τ₁, τ₂) = H(1, 2), v₂ = Min_{τ₂} Max_{τ₁} H(τ₁, τ₂) = H(2, 2), so v₁ < v₂.
Proof: If either ξ or η were something else than we found above, then η or ξ respectively must have a component 0, owing to the characteristic condition (17:D) of 17.9. But then η or ξ would differ from the above values, since in these both components are positive. So we see: If either ξ or η differs from the above values, then both do. And then both must have a component 0. For both, the other component is then 1, i.e. both are coordinate vectors.¹ Hence the saddle point of K(ξ, η) which they represent is really one of H(τ₁, τ₂), cf. (17:E) in 17.9. Thus the game would be strictly determined; but we know that it is not in this case.
This completes the proof.
All four expressions in (18:11) are now seen to have the same value, namely
                (H(1, 1)H(2, 2) − H(1, 2)H(2, 1)) / (H(1, 1) + H(2, 2) − H(1, 2) − H(2, 1)),
and by (17:5:a), (17:5:b) in 17.5.2. this is the value of v'. So we have
(18:13)         v' = (H(1, 1)H(2, 2) − H(1, 2)H(2, 1)) / (H(1, 1) + H(2, 2) − H(1, 2) − H(2, 1)).
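These formulae can be checked mechanically. The following sketch (in Python, with exact rational arithmetic; the function name and the nested-list representation of H are ours, not the text's) solves a 2×2 game by (18:12), (18:13) and verifies the equalization property (18:11) on Matching Pennies:

```python
from fractions import Fraction as F

def solve_2x2(H):
    # H[t1][t2], 0-indexed; assumes Case (B2), so the denominator D != 0.
    D = H[0][0] + H[1][1] - H[0][1] - H[1][0]
    xi  = (F(H[1][1] - H[1][0], D), F(H[0][0] - H[0][1], D))  # player 1
    eta = (F(H[1][1] - H[0][1], D), F(H[0][0] - H[1][0], D))  # player 2
    v   = F(H[0][0] * H[1][1] - H[0][1] * H[1][0], D)         # (18:13)
    return xi, eta, v

# Matching Pennies: H(1,1) = H(2,2) = 1, H(1,2) = H(2,1) = -1.
H = [[1, -1], [-1, 1]]
xi, eta, v = solve_2x2(H)
assert xi == eta == (F(1, 2), F(1, 2)) and v == 0

# (18:11): xi equalizes the two columns, eta equalizes the two rows,
# and the common value is v'.
cols = [H[0][t2] * xi[0] + H[1][t2] * xi[1] for t2 in (0, 1)]
rows = [H[t1][0] * eta[0] + H[t1][1] * eta[1] for t1 in (0, 1)]
assert cols[0] == cols[1] == rows[0] == rows[1] == v
```

The same four divisions reproduce every numerical example of 18.4. below.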
18.3. Qualitative Characterizations
18.3.1. The formal results in 18.2. can be summarized in various ways
which make their meaning clearer. We begin with this criterion:
The fields (1, 1), (2, 2) form one diagonal of the matrix scheme of Fig. 27,
the fields (1, 2), (2, 1) form the other diagonal.
We say that two sets of numbers E and F are separated either if every
element of E is greater than every element of F, or if every element of E
is smaller than every element of F.
Consider now the Cases (A), (B₁), (B₂) of 18.2. In the first two cases the game is strictly determined and the elements on one diagonal of the matrix are not separated from those on the other.² In the last case the game is not strictly determined, and the elements on one diagonal of the matrix are separated from those on the other.³
Thus separation of the diagonals is necessary and sufficient for the game not being strictly determined. This criterion was obtained subject to the use made in 18.2. of the adjustments of 18.1.3. But the three processes of adjustment described in 18.1.3. affect neither strict determinateness nor separation of the diagonals.⁴ Hence our first criterion is always valid.
We restate it:
¹ {1, 0} or {0, 1}.
² Case (A): H(1, 1) ≧ H(1, 2) ≧ H(2, 2) by (18:2). Case (B₁): H(1, 1) > H(1, 2) ≧ H(2, 2) by (18:7).
³ Case (B₂): H(1, 1) ≧ H(2, 2) > H(1, 2) ≧ H(2, 1) by (18:10).
⁴ The first is evident, since these are only changes in notation, inessential for the game. The second is immediately verified.
(18:A) The game is not strictly determined if and only if the elements
on one diagonal of the matrix are separated from those on the
other.
18.3.2. In case (B₂), i.e. when the game is not strictly determined, both the (unique) ξ of A and the (unique) η of B which we found have both components ≠ 0. This, as well as the statement of uniqueness, is unaffected by the adjustments described in 18.1.3.¹ So we have:
(18:B) If the game is not strictly determined, then there exists only one good strategy ξ (i.e. in A) and only one good strategy η (i.e. in B), and both have both their components positive. I.e. both players must really resort to mixed strategies.
According to (18:B) no component of ξ or η (ξ in A, η in B) is zero. Hence the criterion of 17.9. shows that the argument which preceded (18:11), which was then sufficient without being necessary, is now necessary (and sufficient). Hence (18:11) must be satisfied, and therefore all of its consequences are true. This applies in particular to the values ξ₁, ξ₂, η₁, η₂ given after (18:12), and to the value of v' given in (18:13). All these formulae thus apply whenever the game is not strictly determined.
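Criterion (18:A) translates directly into code; a minimal sketch (the function names are ours, for illustration only):

```python
def separated(E, G):
    # Two sets of numbers are separated if every element of one is
    # greater than every element of the other (18.3.1).
    return max(E) < min(G) or max(G) < min(E)

def strictly_determined_2x2(H):
    # (18:A): the game is NOT strictly determined precisely when the
    # diagonal (1,1),(2,2) is separated from the diagonal (1,2),(2,1).
    return not separated([H[0][0], H[1][1]], [H[0][1], H[1][0]])

assert not strictly_determined_2x2([[1, -1], [-1, 1]])  # Matching Pennies
assert strictly_determined_2x2([[3, 2], [1, 0]])        # saddle point at (1,2)
```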
18.3.3. We now formulate another criterion:
In a general matrix H(τ₁, τ₂), cf. Fig. 15 on p. 99 (we allow for a moment any β₁, β₂), we say that a row (say τ₁') or a column (say τ₂') majorizes another row (say τ₁'') or column (say τ₂''), respectively, if this is true for their corresponding elements without exception. I.e. if H(τ₁', τ₂) ≧ H(τ₁'', τ₂) for all τ₂, or if H(τ₁, τ₂') ≧ H(τ₁, τ₂'') for all τ₁.
This concept has a simple meaning: It means that the choice of τ₁' is at least as good for player 1 as that of τ₁'', or that the choice of τ₂' is at most as good for player 2 as that of τ₂'', and that this is so in both cases irrespective of what the opponent does.²
Let us now return to our present problem (β₁ = β₂ = 2). Consider again the Cases (A), (B₁), (B₂) of 18.2. In the first two cases a row or a column majorizes the other.³ In the last case neither is true.⁴
Thus the fact that a row or a column majorizes the other is necessary and sufficient for Γ being strictly determined. Like our first criterion, this is subject to the use made in 18.2. of the adjustments of 18.1.3. And, as there, those processes of adjustment affect neither strict determinateness nor majorization of rows or columns. Hence our present criterion too is always valid. We restate it:
(18:C) The game Γ is strictly determined if and only if a row or a column majorizes the other.
¹ These too are immediately verified.
² This is, of course, an exceptional occurrence: In general the relative merits of two alternative choices will depend on what the opponent does.
³ Case (A): Column 1 majorizes column 2, by (18:2). Case (B₁): Row 1 majorizes row 2, by (18:7).
⁴ Case (B₂): (18:10) excludes all four possibilities, as is easily verified.
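Criterion (18:C) can be sketched in the same way (again, the function names are ours):

```python
def row_majorizes(H, r1, r2):
    # Row r1 majorizes row r2: H(r1, t2) >= H(r2, t2) for every t2.
    return all(a >= b for a, b in zip(H[r1], H[r2]))

def col_majorizes(H, c1, c2):
    # Column c1 majorizes column c2: H(t1, c1) >= H(t1, c2) for every t1.
    return all(row[c1] >= row[c2] for row in H)

def strictly_determined_2x2(H):
    # (18:C): strictly determined <=> a row or a column majorizes the other.
    return (row_majorizes(H, 0, 1) or row_majorizes(H, 1, 0) or
            col_majorizes(H, 0, 1) or col_majorizes(H, 1, 0))

assert not strictly_determined_2x2([[1, -1], [-1, 1]])
assert strictly_determined_2x2([[3, 2], [1, 0]])  # row 1 majorizes row 2
```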
18.3.4. That the condition of (18:C) is sufficient for strict determinateness
is not surprising: It means that for one of the two players one of his possible
choices is under all conditions at least as good as the other (cf. above). Thus
he knows what to do and his opponent knows what to expect, which is likely
to imply strict determinateness.
Of course these considerations imply a speculation on the rationality
of the behavior of the other player, from which our original discussion is
free. The remarks at the beginning and at the end of 15.8. apply to a
certain extent to this, much simpler, situation.
What really matters in this result (18:C), however, is that the necessity of the condition is also established; i.e. that nothing more subtle than outright majorization of rows or columns can cause strict determinateness.
It should be remembered that we are considering the simplest possible case: β₁ = β₂ = 2. We shall see in 18.5. how conditions get more involved in all respects when β₁, β₂ increase.
18.4. Discussion of Some Specific Games. (Generalized Forms of Matching Pennies)
18.4.1. The following are some applications of the results of 18.2. and
18.3.
(a) Matching Pennies in its ordinary form, where the H matrix of Figure 27 is given by Figure 12 on p. 94. We know that this game has the value
                v' = 0
and the (unique) good strategies
                ξ = η = {1/2, 1/2}.
(Cf. 17.1. The formulae of 18.2. will, of course, give this immediately.)
18.4.2. (b) Matching Pennies, where matching on heads gives a double premium. Thus the matrix of Figure 27 differs from that of Figure 12 by the doubling of its (1, 1) element:

                1       2
        1       2      −1
        2      −1       1

                Figure 28a.

The diagonals are separated (1 and 2 are > than −1), hence the good strategies are unique and mixed (cf. (18:A), (18:B)). By using the pertinent formulae of Case (B₂) in 18.2.5., we obtain the value
                v' = 1/5
and the good strategies
                ξ = η = {2/5, 3/5}.
It will be observed that the premium put on matching heads has increased
the value of a play for player 1 who tries to match. It also causes him to
choose heads less frequently, since the premium makes this choice plausible
and therefore dangerous. The direct threat of extra loss by being matched
on heads influences player 2 in the same way. This verbal argument has
some plausibility but is certainly not stringent. Our formulae which yielded
this result, however, were stringent.
18.4.3. (c) Matching Pennies, where matching on heads gives a double premium but failing to match on a choice (by player 1) of heads gives a triple penalty. Thus the matrix of Figure 27 is modified as follows:

                1       2
        1       2      −3
        2      −1       1

                Figure 28b.

The diagonals are separated (1 and 2 are > than −1, −3), hence the good strategies are unique and mixed (cf. as before). The formulae used before give the value
                v' = −1/7
and the good strategies
                ξ = {2/7, 5/7},  η = {4/7, 3/7}.
We leave it to the reader to formulate a verbal interpretation of this
result, in the same sense as before. The construction of other examples
of this type is easy along the lines indicated.
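The values and strategies quoted in (b) and (c) follow from the Case (B₂) formulas of 18.2.5.; a sketch in exact arithmetic (the helper `solve` is our own wrapper around (18:12), (18:13)):

```python
from fractions import Fraction as F

def solve(H):
    # Case (B2): mixed strategies from (18:12), value from (18:13).
    D = H[0][0] + H[1][1] - H[0][1] - H[1][0]
    xi  = (F(H[1][1] - H[1][0], D), F(H[0][0] - H[0][1], D))
    eta = (F(H[1][1] - H[0][1], D), F(H[0][0] - H[1][0], D))
    v   = F(H[0][0] * H[1][1] - H[0][1] * H[1][0], D)
    return xi, eta, v

# (b) Figure 28a: matching on heads gives a double premium.
xi, eta, v = solve([[2, -1], [-1, 1]])
assert v == F(1, 5) and xi == eta == (F(2, 5), F(3, 5))

# (c) Figure 28b: double premium, and a triple penalty on unmatched heads.
xi, eta, v = solve([[2, -3], [-1, 1]])
assert v == F(-1, 7)
assert xi == (F(2, 7), F(5, 7)) and eta == (F(4, 7), F(3, 7))
```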
18.4.4. (d) We saw in 18.1.2. that these variants of Matching Pennies are, in a way, the simplest forms of zero-sum two-person games. By this circumstance they acquire a certain general significance, which is further corroborated by the results of 18.2. and 18.3.: indeed we found there that this class of games exhibits in their simplest forms the conditions under which strictly and not-strictly determined cases alternate. As a further addendum in the same spirit we point out that the relatedness of these games to Matching Pennies stresses only one particular aspect. Other games which appear in an entirely different material garb may, in reality, well belong to this class. We shall give an example of this:
The game to be considered is an episode from the Adventures of Sherlock Holmes.¹ ²
¹ Conan Doyle: The Adventures of Sherlock Holmes, New York, 1938, pp. 550-551.
² The situation in question is of course again to be appraised as a paradigm of many possible conflicts in practical life. It was expounded as such by O. Morgenstern: Wirtschaftsprognose, Vienna, 1928, p. 98.
The author does not maintain, however, some pessimistic views expressed ibid. or in "Vollkommene Voraussicht und wirtschaftliches Gleichgewicht," Zeitschrift für Nationalökonomie, Vol. 6, 1934.
Accordingly our solution also answers doubts in the same vein expressed by K. Menger: Neuere Fortschritte in den exakten Wissenschaften, "Einige neuere Fortschritte in der exakten Behandlung sozialwissenschaftlicher Probleme," Vienna, 1936, pp. 117 and 131.
Sherlock Holmes desires to proceed from London to Dover and hence
to the Continent in order to escape from Professor Moriarty who pursues
him. Having boarded the train he observes, as the train pulls out, the appearance of Professor Moriarty on the platform. Sherlock Holmes takes it for granted (and in this he is assumed to be fully justified) that his adversary, who has seen him, might secure a special train and overtake him. Sherlock Holmes is faced with the alternative of going to Dover or of leaving the train at Canterbury, the only intermediate station. His adversary (whose intelligence is assumed to be fully adequate to visualize these possibilities) has the same choice. Both opponents must choose the place of their detrainment in ignorance of the other's corresponding decision.
If, as a result of these measures, they should find themselves, in fine, on
the same platform, Sherlock Holmes may with certainty expect to be killed
by Moriarty. If Sherlock Holmes reaches Dover unharmed he can make
good his escape.
What are the good strategies, particularly for Sherlock Holmes? This game has obviously a certain similarity to Matching Pennies, Professor Moriarty being the one who desires to match. Let him therefore be player 1, and Sherlock Holmes be player 2. Denote the choice to proceed to Dover by 1 and the choice to quit at the intermediate station by 2. (This applies to both τ₁ and τ₂.)
Let us now consider the H matrix of Figure 27. The fields (1, 1) and (2, 2) correspond to Professor Moriarty catching Sherlock Holmes, which it is reasonable to describe by a very high value of the corresponding matrix element, say 100. The field (2, 1) signifies that Sherlock Holmes successfully escaped to Dover, while Moriarty stopped at Canterbury. This is Moriarty's defeat as far as the present action is concerned, and should be described by a big negative value of the matrix element, of the same order of magnitude as, but smaller than, the positive value mentioned above: say, −50. The field (1, 2) signifies that Sherlock Holmes escapes Moriarty at the intermediate station, but fails to reach the Continent. This is best viewed as a tie, and assigned the matrix element 0.
The H matrix is given by Figure 29:

                1       2
        1      100      0
        2      −50     100

                Figure 29.

As in (b), (c) above, the diagonals are separated (100 is > than 0, −50); hence the good strategies are again unique and mixed. The formulae used before give the value (for Moriarty)
                v' = 40
and the good strategies (ξ for Moriarty, η for Sherlock Holmes):
                ξ = {3/5, 2/5},  η = {2/5, 3/5}.
Thus Moriarty should go to Dover with a probability of 60%, while Sherlock Holmes should stop at the intermediate station with a probability of 60%, the remaining 40% being left in each case for the other alternative.¹
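The Holmes-Moriarty numbers, including the 48% chance of a fatal match mentioned in the footnote on Conan Doyle's narrative, can be recomputed from the Case (B₂) formulas; a sketch:

```python
from fractions import Fraction as F

H = [[100, 0], [-50, 100]]  # Figure 29; rows: Moriarty, columns: Holmes
D = H[0][0] + H[1][1] - H[0][1] - H[1][0]
xi  = (F(H[1][1] - H[1][0], D), F(H[0][0] - H[0][1], D))  # Moriarty
eta = (F(H[1][1] - H[0][1], D), F(H[0][0] - H[1][0], D))  # Holmes
v   = F(H[0][0] * H[1][1] - H[0][1] * H[1][0], D)

assert v == 40                    # the value of a play, for Moriarty
assert xi == (F(3, 5), F(2, 5))   # Dover with probability 60%
assert eta == (F(2, 5), F(3, 5))  # intermediate station with probability 60%

# Probability that the two independent mixed choices "match",
# i.e. that Holmes is caught:
catch = xi[0] * eta[0] + xi[1] * eta[1]
assert catch == F(12, 25)         # 48%
```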
18.5. Discussion of Some Slightly More Complicated Games
18.5.1. The general solution of the zero-sum two-person game which we obtained in 17.8. brings certain alternatives and concepts particularly into the foreground: The presence or absence of strict determinateness, the value v' of a play, and the sets A, B of good strategies. For all these we obtained very simple explicit characterizations and determinations in 18.2. These became even more striking in the reformulation of those results in 18.3.
This simplicity may even lead to some misunderstandings. Indeed, the results of 18.2., 18.3. were obtained by explicit computations of the most elementary sort. The combinatorial criteria of (18:A), (18:C) in 18.3. for strict determinateness were, at least in their final form, also considerably more straightforward than anything we have experienced before. This may give occasion to doubts whether the somewhat involved considerations of 17.8. (and the corresponding considerations of 14.5. in the case of strict determinateness) were necessary, particularly since they are based on the mathematical theorem of 17.6. which necessitates our analysis of linearity and convexity in 16. If all this could be replaced by discussions in the style of 18.2., 18.3., then our mode of discussion of 16. and 17. would be entirely unjustified.²
This is not so. As pointed out at the end of 18.3., the great simplicity of the procedures and results of 18.2. and 18.3. is due to the fact that they apply only to the simplest type of zero-sum two-person games: the Matching Pennies class of games, characterized by β₁ = β₂ = 2. For the general case the more abstract machinery of 16. and 17. seems so far indispensable.
¹ The narrative of Conan Doyle excusably disregards mixed strategies and states instead the actual developments. According to these Sherlock Holmes gets out at the intermediate station and triumphantly watches Moriarty's special train going on to Dover. Conan Doyle's solution is the best possible under his limitations (to pure strategies), insofar as he attributes to each opponent the course which we found to be the more probable one (i.e. he replaces 60% probability by certainty). It is, however, somewhat misleading that this procedure leads to Sherlock Holmes's complete victory, whereas, as we saw above, the odds (i.e. the value of a play) are definitely in favor of Moriarty. (Our result for ξ, η yields that Sherlock Holmes is as good as 48% dead when his train pulls out from Victoria Station. Compare in this connection the suggestion in Morgenstern, loc. cit., p. 98, that the whole trip is unnecessary because the loser could be determined before the start.)
² Of course it would not lack rigor, but it would be an unnecessary use of heavy mathematical machinery on an elementary problem.
It may help to see these things in their right proportions if we show by some examples how the assertions of 18.2., 18.3. fail for greater values of β₁, β₂.
18.5.2. It will actually suffice to consider games with β₁ = β₂ = 3.
In fact they will be somewhat related to Matching Pennies, more general
only by introduction of a third alternative.
Thus both players will have the alternative choices 1, 2, 3 (i.e. the values for τ₁, τ₂). The reader will best think of the choice 1 in terms of choosing "heads," the choice 2 of choosing "tails," and the choice 3 as something like "calling off." Player 1 again tries to match. If either player "calls off," then it will not matter whether the other player chooses "heads" or "tails"; the only thing of importance is whether he chooses one of these two at all or whether he "calls off" too. Consequently the matrix has now the appearance of Figure 30:

                1       2       3
        1       1      −1       γ
        2      −1       1       γ
        3       α       α       β

                Figure 30.
The four first elements (i.e. the first two elements of the first two rows) are the familiar pattern of Matching Pennies (cf. Fig. 12). The two fields with α are operative when player 1 "calls off" and player 2 does not. The two elements with γ are operative in the opposite case. The element with β refers to the case where both players "call off." By assigning appropriate values (positive, negative or zero) we can put a premium or a penalty on any one of these occurrences, or make it indifferent.
We shall obtain all the examples we need at this juncture by specializing this scheme, i.e. by choosing the above α, β, γ appropriately.
18.5.3. Our purpose is to show that none of the results (18:A), (18:B), (18:C) of 18.3. is generally true.
Ad (18:A): This criterion of strict determinateness is clearly tied to the special case β₁ = β₂ = 2: For greater values of β₁, β₂ the two diagonals do not even exhaust the matrix rectangle, and therefore the occurrence on the diagonals alone cannot be characteristic as before.
Ad (18:B): We shall give an example of a game which is not strictly
determined, but where nevertheless there exists a good strategy which is
pure for one player (but of course not for the other). This example has the
further peculiarity that one of the players has several good strategies, while
the other has only one.
We choose in the game of Figure 30 α, β, γ as follows: α = a, β = −b, γ = 0:

                1       2       3
        1       1      −1       0
        2      −1       1       0
        3       a       a      −b

                Figure 31.

a > 0, b > 0. The reader will determine for himself which combinations of "calling off" are at a premium or are penalized in the previously indicated sense.
This is a complete discussion of the game, using the criteria of 17.8.:
For ξ = {1/2, 1/2, 0} always K(ξ, η) = 0, i.e. with this strategy player 1 cannot lose. Hence v' ≧ 0. For η = δ₃ = {0, 0, 1} always K(ξ, η) ≦ 0;¹ i.e. with this strategy player 2 cannot lose. Hence v' ≦ 0. Thus we have
                v' = 0.
Consequently ξ is a good strategy if and only if always K(ξ, η) ≧ 0, and η is a good strategy if and only if always K(ξ, η) ≦ 0.² The former is easily seen to be true if and only if
                ξ₁ = ξ₂ = 1/2,  ξ₃ = 0,
and the latter if and only if
                η₁ = η₂ ≦ b / (2(a + b)).
Thus the set A of all good strategies ξ contains precisely one element, and this is not a pure strategy. The set B of all good strategies η, on the other hand, contains infinitely many strategies, and one of them is pure: namely η = δ₃ = {0, 0, 1}.
The sets A, B can be visualized by making use of the graphical representation of Figure 21 (cf. Figures 32, 33):
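The assertions about Figure 31 can be verified numerically; a sketch with the concrete choice a = 2, b = 1 (our choice for illustration; any a > 0, b > 0 would do):

```python
from fractions import Fraction as F

a, b = F(2), F(1)        # illustrative values; any a > 0, b > 0 would do
H = [[1, -1, 0],
     [-1, 1, 0],
     [a, a, -b]]         # Figure 31

# Player 1's unique good strategy yields exactly 0 against every column:
xi = (F(1, 2), F(1, 2), F(0))
for t2 in range(3):
    assert sum(xi[t1] * H[t1][t2] for t1 in range(3)) == 0

# The pure strategy delta_3 holds player 1 to at most 0 in every row:
eta = (F(0), F(0), F(1))
for t1 in range(3):
    assert sum(H[t1][t2] * eta[t2] for t2 in range(3)) <= 0

# Any eta with eta_1 = eta_2 <= b/(2(a + b)) is also good, e.g. the
# boundary point:
e1 = b / (2 * (a + b))
eta_b = (e1, e1, 1 - 2 * e1)
for t1 in range(3):
    assert sum(H[t1][t2] * eta_b[t2] for t2 in range(3)) <= 0
```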
Ad (18:C): We shall give an example of a game which is strictly determined but in which no two rows and equally no two columns majorize each other. We shall actually do somewhat more.
18.5.4. Allow for a moment any β₁, β₂. The significance of the majorization of rows or of columns by each other was considered at the end of 18.3.
¹ It is actually equal to −bξ₃.
² We leave to the reader the simple verbal interpretation of these statements.
for neglecting one of his possible choices in favor of another, and this narrowed the possibilities in a way which could be ultimately connected with strict determinateness.
Specifically: If the row τ₁'' is majorized by the row τ₁', i.e. if H(τ₁'', τ₂) ≦ H(τ₁', τ₂) for all τ₂, then player 1 need never consider the choice τ₁'', since τ₁' is at least as good for him in every contingency. And: If the column τ₂'' majorizes the column τ₂', i.e. if H(τ₁, τ₂'') ≧ H(τ₁, τ₂') for all τ₁, then player 2 need never consider the choice τ₂'', since τ₂' is at least as good for him in every contingency. (Cf. loc. cit., particularly footnote 2 on p. 174. These are of course only heuristic considerations, cf. footnote 1, p. 182.)
Figures 32, 33.
Now we may use an even more general set-up: If the row τ₁'', i.e. player 1's pure strategy corresponding to τ₁'', is majorized by an average of all rows τ₁ ≠ τ₁'', i.e. by a mixed strategy ξ with the component ξ_{τ₁''} = 0, then it is still plausible to assume that player 1 need never consider the choice of τ₁'', since the other τ₁ are at least as good for him in every contingency. The mathematical expression of this situation is this:

(18:14:a)       H(τ₁'', τ₂) ≦ Σ_{τ₁=1}^{β₁} H(τ₁, τ₂) ξ_{τ₁}   for all τ₂,   with ξ_{τ₁''} = 0.

The corresponding situation for player 2 arises if the column τ₂'', i.e. player 2's pure strategy corresponding to τ₂'', majorizes an average of all columns τ₂ ≠ τ₂'', i.e. a mixed strategy η with the component η_{τ₂''} = 0. The mathematical expression of this situation is this:

(18:14:b)       H(τ₁, τ₂'') ≧ Σ_{τ₂=1}^{β₂} H(τ₁, τ₂) η_{τ₂}   for all τ₁,   with η_{τ₂''} = 0.

The conclusions are the analogues of the above.
The conclusions are the analogues of the above.
Thus a game in which (18:14:a) or (18:14:b) occurs permits of an immediate and plausible narrowing of the possible choices for one of the players.¹
18.5.5. We are now going to show that the applicability of (18:14:a),
(18:14:b) is very limited: We shall specify a strictly determined game in
which neither (18:14:a) nor (18:14:b) is ever valid.
Let us therefore return to the class of games of Figure 30 (β₁ = β₂ = 3). We choose 0 < α < 1, β = 0, γ = −α:

                1       2       3
        1       1      −1      −α
        2      −1       1      −α
        3       α       α       0

                Figure 34.

The reader will determine for himself which combinations of "calling off" are at a premium or are penalized in the previously indicated sense.
This is a discussion of the game:
The element (3, 3) is clearly a saddle point, so the game is strictly determined and
                v = v' = 0.
It is not difficult to see now (with the aid of the method used in 18.5.3.) that the set A of all good strategies ξ, as well as the set B of all good strategies η, contains precisely one element: the pure strategy δ₃ = {0, 0, 1}.
On the other hand, the reader will experience little trouble in verifying that neither (18:14:a) nor (18:14:b) is ever valid here, i.e. that in Figure 34 no row is majorized by any average of the two other rows, and that no column majorizes any average of the other two columns.
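Both claims about Figure 34 can be probed mechanically; the sketch below fixes α = 1/2 (our choice) and uses a grid scan over the mixing weight, which is only a numeric spot check, not a proof:

```python
from fractions import Fraction as F

alpha = F(1, 2)          # any 0 < alpha < 1; the value 1/2 is our choice
H = [[1, -1, -alpha],
     [-1, 1, -alpha],
     [alpha, alpha, 0]]  # Figure 34

# (3, 3) is a saddle point: maximum of its column, minimum of its row.
assert H[2][2] == max(H[t1][2] for t1 in range(3))
assert H[2][2] == min(H[2][t2] for t2 in range(3))

def majorized_by_mixture(r):
    # Is row r majorized by some average of the other two rows (18:14:a)?
    # One mixing weight w remains; we scan a grid of values of w.
    others = [H[t1] for t1 in range(3) if t1 != r]
    for k in range(101):
        w = F(k, 100)
        avg = [w * x + (1 - w) * y for x, y in zip(*others)]
        if all(H[r][t2] <= avg[t2] for t2 in range(3)):
            return True
    return False

assert not any(majorized_by_mixture(r) for r in range(3))
```

The symmetric check for (18:14:b) is the same scan applied to the transposed matrix with the inequality reversed.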
18.6. Chance and Imperfect Information
18.6.1. The examples discussed in the preceding paragraphs make it clear that the role of chance (more precisely, of probability) in a game is not necessarily the obvious one, that which is directly provided for in the rules of the game. The games described in Figures 27 and 30 have rules
¹ This is of course a thoroughly heuristic argument. We do not need it, since we have the complete discussions of 14.5. and of 17.8. But one might suspect that it can be used to replace or at least to simplify those discussions. The example which we are going to give in the text seems to dispel any such hope.
There is another course which might produce results: If (18:14:a) or (18:14:b) holds, then a combination of it with 17.8. can be used to gain information about the sets of good strategies, A and B. We do not propose to take up this subject here.
which do not provide for chance; the moves are personal without exception.¹ Nevertheless we found that most of them are not strictly determined, i.e. that their good strategies are mixed strategies involving the explicit use of probabilities.
On the other hand, our analysis of those games in which perfect information prevails showed that these are always strictly determined, i.e. that they have good strategies which are pure strategies, involving no probabilities at all. (Cf. 15.)
Thus from the point of view of the players' behavior, i.e. of the strategies to be used, the important circumstance is whether the game is strictly determined or not, and not at all whether it contains any chance moves.
The results of 15. on games in which perfect information prevails indicate that there exists a close connection between strict determinateness and the rules which govern the players' state of information. To establish this point quite clearly, and in particular to show that the presence of chance moves is quite irrelevant, we shall now show this: In every (zero-sum two-person) game any chance move can be replaced by a combination of personal moves, so that the strategical possibilities of the game remain exactly the same. It will be necessary to allow for rules involving imperfect information of the players, but this is just what we want to demonstrate: That imperfect information comprises (among other things) all possible consequences of explicit chance moves.²
18.6.2. Let us consider, accordingly, a (zero-sum two-person) game Γ, and in it a chance move M_κ.³ Enumerate the alternatives as usual by σ_κ = 1, ⋯, α_κ and assume that their probabilities p_κ(1), ⋯, p_κ(α_κ) are all equal to 1/α_κ.⁴ Now replace M_κ by two personal moves M_κ', M_κ''.
¹ The reduction of all games to the normalized form shows even more: It proves that every game is equivalent to one without chance moves, since the normalized form contains only personal moves.
² A direct way of removing chance moves exists of course after the introduction of the (pure) strategies and the umpire's choice, as described in 11.1. Indeed, as the last step in bringing a game into its normalized form we eliminated the remaining chance moves by the explicit introduction of expectation values in 11.2.3.
But we now propose to eliminate the chance moves without upsetting the structure of the game so radically. We shall replace each chance move individually by personal moves (by two moves, as will be seen), so that their respective roles in determining the players' strategies will always remain differentiated and individually appraisable. This detailed treatment is likely to give a clearer idea of the structural questions involved than the summary procedure mentioned above.
³ For our present purposes it is irrelevant whether the characteristics of M_κ depend upon the previous course of the play or not.
⁴ This is no real loss of generality. To see this, assume that the probabilities in question have arbitrary rational values, say r₁/t, ⋯, r_{α_κ}/t (r₁, ⋯, r_{α_κ} and t integers). (Herein lies an actual restriction, but an arbitrarily small one, since any probabilities can be approximated by rational ones to any desired degree.)
Now modify the chance move M_κ so that it has r₁ + ⋯ + r_{α_κ} = t alternatives (instead of α_κ), designated by σ_κ' = 1, ⋯, t (instead of σ_κ = 1, ⋯, α_κ); and so that each of the first r₁ values of σ_κ' has the same effect on the play as σ_κ = 1, each of the next r₂ values of σ_κ' the same as σ_κ = 2, etc., etc. Then giving all σ_κ' = 1, ⋯, t the same probability 1/t has the same effect as giving σ_κ = 1, ⋯, α_κ the original probabilities r₁/t, ⋯, r_{α_κ}/t.
M_κ' and M_κ'' are personal moves of players 1 and 2 respectively. Both have α_κ alternatives; we denote the corresponding choices by σ_κ' = 1, ⋯, α_κ and σ_κ'' = 1, ⋯, α_κ. It is immaterial in which order these moves are made, but we prescribe that both moves must be made without any information concerning the outcome of any moves (including the other move M_κ', M_κ''). We define a function δ(σ', σ'') by this matrix scheme (cf. Figure 35; the matrix element is δ(σ', σ'')¹). The influence of M_κ', M_κ'', i.e. of the corresponding (personal) choices σ_κ', σ_κ'', on the outcome of the game is the same as that which M_κ would have had with the corresponding (chance) choice σ_κ = δ(σ_κ', σ_κ''). We denote this new game by Γ*. We claim that the strategical possibilities of Γ* are the same as those of Γ.

                Figure 35. (The matrix of δ(σ', σ''): rows σ' = 1, ⋯, α_κ, columns σ'' = 1, ⋯, α_κ.)
18.6.3. Indeed: Let player 1 use in Γ* a given mixed strategy of Γ, with the further specification concerning the move M_κ'² to choose all σ_κ' = 1, ⋯, α_κ with the same probability 1/α_κ. Then the game Γ* with this strategy of player 1 will be from player 2's point of view the same as Γ. This is so because any choice of his at M_κ'' (i.e. any σ_κ'' = 1, ⋯, α_κ) produces the same result as the original chance move M_κ: One look at Figure 35 will show that the σ'' = σ_κ'' column of that matrix contains every number σ_κ = δ(σ', σ'') = 1, ⋯, α_κ precisely once, i.e. that δ(σ', σ'') will assume every value 1, ⋯, α_κ (owing to player 1's strategy) with the same probability 1/α_κ, just as M_κ would have done. So from player 1's point of view, Γ* is at least as good as Γ.
The same argument, with players 1 and 2 interchanged (hence with the rows of the matrix in Figure 35 playing the roles which the columns had above), shows that from player 2's point of view too Γ* is at least as good as Γ.
¹ Arithmetically:
                δ(σ', σ'') = σ' − σ'' + 1           for σ' ≧ σ'',
                δ(σ', σ'') = σ' − σ'' + 1 + α_κ     for σ' < σ''.
Hence δ(σ', σ'') is always one of the numbers 1, ⋯, α_κ.
² M_κ' is his personal move, so his strategy must provide for it in Γ*. There was no need for this in Γ, since M_κ was a chance move.
Since the viewpoints of the two players are opposite, this means that Γ* and Γ are equivalent.¹
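The Latin-square property of δ on which 18.6.3. rests is easy to verify; a sketch using the arithmetical form of δ given in footnote 1:

```python
from collections import Counter

def delta(s1, s2, alpha):
    # Footnote 1: sigma' - sigma'' + 1, raised by alpha when it would
    # drop below 1, so the result is always one of 1, ..., alpha.
    return s1 - s2 + 1 if s1 >= s2 else s1 - s2 + 1 + alpha

alpha = 5
rng = range(1, alpha + 1)

# Every column and every row of the delta-matrix contains each of
# 1, ..., alpha exactly once (a Latin square):
for s2 in rng:
    assert sorted(delta(s1, s2, alpha) for s1 in rng) == list(rng)
for s1 in rng:
    assert sorted(delta(s1, s2, alpha) for s2 in rng) == list(rng)

# Hence, if player 1 draws sigma' uniformly, the outcome delta is uniform
# whatever sigma'' player 2 picks: the chance move is reproduced exactly.
counts = Counter(delta(s1, 3, alpha) for s1 in rng)
assert all(c == 1 for c in counts.values())
```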
18.7. Interpretation of This Result
18.7.1. Repeated application to all chance moves of Γ of the operation described in 18.6.2., 18.6.3. will remove them all, thus proving the final contention of 18.6.1. The meaning of this result may be even better understood if we illustrate it by some practical instances of this manipulation.
(A) Consider the following quite elementary "game of chance": The two players decide, by a 50%-50% chance device, who pays the other one unit. The application of the device of 18.6.2., 18.6.3. transforms this game, which consists of precisely one chance move, into one of two personal moves. A look at the matrix of Figure 35 for α_κ = 2, with the δ(σ', σ'') values 1, 2 replaced by the actual payments 1, −1, shows that it coincides with Figure 12. Remembering 14.7.2., 14.7.3. we see that this means, what is plain enough directly, that this is the game of Matching Pennies.
I.e.: Matching Pennies is the natural device to produce the probabilities 1/2, 1/2 by personal moves and imperfect information. (Recall 17.1.!)
(B) Modify (A) so as to allow for a "tie": The two players decide by a 33⅓%, 33⅓%, 33⅓% chance device who pays the other one unit, or whether nobody pays anything at all. Apply again the device of 18.6.2., 18.6.3. Now the matrix of Figure 35 with α_κ = 3, with the δ(σ', σ'') values 1, 2, 3 replaced by the actual payments 0, 1, −1, coincides with Figure 13. By 14.7.2., 14.7.3. we see that this is the game of Stone, Paper, Scissors.
I.e., Stone, Paper, Scissors is the natural device to produce the probabilities 1/3, 1/3, 1/3 by personal moves and imperfect information. (Recall 17.1.!)
18.7.2. (C) The δ(σ′, σ″) of Figure 35 can be replaced by another function,
and even the domains σ′ = 1, ···, α_κ and σ″ = 1, ···, α_κ by other
domains σ′ = 1, ···, α′_κ and σ″ = 1, ···, α″_κ, provided that the following
remains true: Every column of the matrix of Figure 35 contains each
number 1, ···, α_κ the same number of times,² and every row contains
each number 1, ···, α_κ the same number of times.³ Indeed, the considerations
of 18.6.2. made use of these two properties of δ(σ′, σ″) (and of
ξ_{σ′}, η_{σ″}) only.
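The two properties just stated can be checked mechanically for the δ(σ′, σ″) of the arithmetical footnote above. A minimal sketch in Python (all identifiers are ours; `alpha` stands for α_κ):

```python
from fractions import Fraction

def delta(s1, s2, alpha):
    """delta(sigma', sigma'') of the arithmetical footnote: s1 - s2 + 1,
    shifted up by alpha when s1 < s2, so that the value lies in 1..alpha."""
    return s1 - s2 + 1 + (alpha if s1 < s2 else 0)

alpha = 5
vals = list(range(1, alpha + 1))

# Every row and every column of the matrix is a permutation of 1..alpha:
for s1 in vals:
    assert sorted(delta(s1, s2, alpha) for s2 in vals) == vals
for s2 in vals:
    assert sorted(delta(s1, s2, alpha) for s1 in vals) == vals

# Hence if player 1 chooses sigma' uniformly, every value 1..alpha results
# with probability 1/alpha, whatever sigma'' player 2 may choose:
for s2 in vals:
    for v in vals:
        p = sum(Fraction(1, alpha) for s1 in vals if delta(s1, s2, alpha) == v)
        assert p == Fraction(1, alpha)
```

The last loop is the substance of 18.6.2., 18.6.3.: one player's uniform randomization already reproduces the original chance move exactly, irrespective of the other player's behavior.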
It is not difficult to see that the precaution of "cutting" the deck before
dealing cards falls into this category. When one of the 52 cards has to be
chosen by a chance move, with probability 1/52, this is usually achieved by
"mixing" the deck. This is meant to be a chance move, but if the player
who mixes is dishonest, it may turn out to be a "personal" move of his.
¹ We leave it to the reader to cast these considerations into the precise formalism of 11.
and 17.2., 17.8.: This presents absolutely no difficulties, but it is somewhat lengthy.
The above verbal arguments convey the essential reason of the phenomenon under consideration
in a clearer and simpler way, we hope.
² Hence α′_κ/α_κ times; consequently α′_κ must be a multiple of α_κ.
³ Hence α″_κ/α_κ times; consequently α″_κ must be a multiple of α_κ.
186 ZERO-SUM TWO-PERSON GAMES: EXAMPLES
As a protection against this, the other player is permitted to point out the
place in the mixed deck, from which the card in question is to be taken, by
"cutting" the deck at that point. This combination of two moves (even
if they are personal) is equivalent to the originally intended chance move.
The lack of information is, of course, the necessary condition for the effectiveness
of this device.
Here α_κ = 52, α′_κ = 52! = the number of possible arrangements of the
deck, α″_κ = 52 = the number of ways of "cutting." We leave it to the reader
to fill in the details and to choose the δ(σ′, σ″) for this setup.¹
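The mixing-and-cutting scheme can be illustrated on a toy deck of 3 cards instead of 52 (a sketch under our own naming; σ′ is the arrangement produced by the mixer, σ″ the cutting point, and the card drawn plays the role of δ(σ′, σ″)):

```python
from itertools import permutations

deck = ("A", "B", "C")                    # toy deck: 3 cards instead of 52
arrangements = list(permutations(deck))   # alpha' = 3! possible "mixes"
cuts = range(len(deck))                   # alpha'' = 3 possible cutting points

def drawn_card(arrangement, cut):
    """The card taken from the mixed deck at the indicated cutting point."""
    return arrangement[cut]

# Column property: for a fixed cut, each card arises from (3-1)! = 2 of the
# 3! arrangements, so an honest mix alone already makes the draw uniform.
for cut in cuts:
    for card in deck:
        assert sum(drawn_card(arr, cut) == card for arr in arrangements) == 2

# Row property: for a fixed arrangement, the 3 cuts yield each card exactly
# once, so a uniformly random cut defeats even a dishonest mixer.
for arr in arrangements:
    assert sorted(drawn_card(arr, cut) for cut in cuts) == sorted(deck)
```

The two assertions are exactly the row and column properties required in (C) above, with α_κ = 3, α′_κ = 3!, α″_κ = 3.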
19. Poker and Bluffing
19.1. Description of Poker
19.1.1. It has been stressed repeatedly that the case β₁ = β₂ = 2, as
discussed in 18.3. and more specifically in 18.4., comprises only the very
simplest zero-sum two-person games. We then gave in 18.5. some instances
of the complications which can arise in the general zero-sum two-person
game, but the understanding of the implications of our general result
(i.e. of 17.8.) will probably gain more by the detailed discussion of a special
game of the more complicated type. This is even more desirable because
for the games with β₁ = β₂ = 2 the choices of the τ₁, τ₂, called (pure) strategies,
scarcely deserve this name: just calling them "moves" would have
been less exaggerated. Indeed, in these extremely simple games there
could be hardly any difference between the extensive and the normalized form;
and so the identity of moves and strategies, a characteristic of the normalized
form, is inescapable in these games. We shall now consider a game in the
extensive form in which the player has several moves, so that the passage
to the normalized form and to strategies is no longer a vacuous operation.
19.1.2. The game of which we give an exact discussion is Poker.² However,
actual Poker is really a much too complicated subject for an exhaustive
discussion and so we shall have to subject it to some simplifying modifica
¹ We assumed that the mixing is used to produce only one card. If whole "hands"
are dealt, "cutting" is not an absolute safeguard. A dishonest mixer can produce correlations
within the deck which one "cut" cannot destroy, and the knowledge of which
gives this mixer an illegitimate advantage.
² The general considerations concerning Poker and the mathematical discussions of
the variants referred to in the paragraphs which follow, were carried out by J. von
Neumann in 1926-28, but not published before. (Cf. a general reference in "Zur Theorie
der Gesellschaftsspiele," Math. Ann., Vol. 100 [1928].) This applies in particular to the
symmetric variant of 19.4.-19.10., the variants (A), (B) of 19.11.-19.13., and to the entire
interpretation of "Bluffing" which dominates all these discussions. The unsymmetric
variant (C) of 19.14.-19.16. was considered in 1942 for the purposes of this publication.

The work of E. Borel and J. Ville, referred to in footnote 1 on p. 154, also contains
considerations on Poker (Vol. IV, 2: "Applications aux Jeux de Hasard," Chap.
V: "Le jeu de Poker"). They are very instructive, but mainly evaluations of probabilities
applied to Poker in a more or less heuristic way, without a systematic use of any
underlying general theory of games.

A definite strategical phase of Poker ("La Relance" = "The Overbid") is analyzed
on pp. 91-97 loc. cit. This may be regarded also as a simplified variant of Poker,
POKER AND BLUFFING 187
tions, some of which are, indeed, quite radical.¹ It seems to us, nevertheless,
that the basic idea of Poker and its decisive properties will be conserved
in our simplified form. Therefore it will be possible to base general conclusions
and interpretations on the results which we are going to obtain
by the application of the theory previously established.
To begin with, Poker is actually played by any number of persons,²
but since we are now in the discussion of zero-sum two-person games, we
shall set the number of players at two.
The game of Poker begins by dealing to each player 5 cards out of a
deck.³ The possible combinations of 5 which he may get in this way
(there are 2,598,960 of them⁴) are called "hands" and arranged in a linear
order, i.e. there is an exhaustive rule defining which hand is the strongest
of all, which is the second, third, ··· strongest, down to the weakest.⁵
Poker is played in many variants which fall into two classes: "Stud" and
"Draw" games. In a Stud game the player's hand is dealt to him in its
entirety at the very beginning, and he has to keep it unchanged throughout
the entire play. In "Draw" games there are various ways for a player to
exchange all or part of his hand, and in some variants he may get his hand
in several successive stages in the course of the play. Since we wish to
discuss the simplest possible form, we shall examine the Stud game only.
In this case there is no point in discussing the hands as hands, i.e. as
combinations of cards. Denoting the total number of hands by S
(S = 2,598,960 for a full deck, as we saw) we might as well say that each
comparable to the two which we consider in 19.4.-19.10. and 19.14.-19.16. It is actually
closely related to the latter.

The reader who wishes to compare these two variants may find the following indications
helpful:

(I) Our bids a, b correspond to 1 + a, 1 loc. cit.

(II) The difference between our variant of 19.4.-19.10. and that in loc. cit. is this:
If player 1 begins with a "low" bid, then our variant provides for a comparison of hands,
while that in loc. cit. makes him lose the amount of the "low" bid unconditionally. I.e.
we treated this initial "low" bid as "Seeing" (cf. the discussion at the beginning of
19.14., particularly footnote 1 on p. 211) while in loc. cit. it is treated as "Passing."
We believe that our treatment is a better approximation to the phase in question in real
Poker; and in particular that it is needed for a proper analysis and interpretation of
"Bluffing." For technical details cf. footnote 1 on p. 219.
1 Cf. however 19.11. and the end of 19.16.
² The "optimum" (in a sense which we do not undertake to interpret) is supposed
to be 4 or 5.
³ This is occasionally a full deck of 52 cards, but for smaller numbers of participants
only parts of it (usually 32 or 28) are used. Sometimes one or two extra cards with
special functions, "jokers," are added.
⁴ This holds for a full deck. The reader who is conversant with the elements of combinatorics
will note that this is the number of "combinations without repetitions of
5 out of 52":

    (52 over 5) = (52 · 51 · 50 · 49 · 48)/(1 · 2 · 3 · 4 · 5) = 2,598,960.
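The count in this footnote can be confirmed directly:

```python
import math

# "Combinations without repetitions of 5 out of 52":
assert math.comb(52, 5) == (52 * 51 * 50 * 49 * 48) // (1 * 2 * 3 * 4 * 5)
print(math.comb(52, 5))  # 2598960
```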
⁵ This description involves the well known technical terms "Royal Flush," "Straight
Flush," "Four of a Kind," "Full House," etc. There is no need for us to discuss them
here.
player draws a number s = 1, ···, S instead. The idea is that s = S
corresponds to the strongest possible hand, s = S − 1 to the second
strongest, etc., and finally s = 1 to the weakest. Since a "square deal"
amounts to assuming that all possible hands are dealt with the same probability,
we must interpret the drawing of the above number s as a chance
move, each one of the possible values s = 1, ···, S having the same
probability 1/S. Thus the game begins with two chance moves: The
drawing of the number s for player 1 and for player 2,¹ which we denote
by s₁ and s₂.
19.1.3. The next phase of the general game of Poker consists of the
making of "Bids" by the players. The idea is that after one of the players
has made a bid, which involves a smaller or greater amount of money, his
opponent has the choice of " Passing," "Seeing," or "Overbidding."
Passing means that he is willing to pay, without further argument, the
amount of his last preceding bid (which is necessarily lower than the present
bid). In this case it is irrelevant what hands the two players hold. The
hands are not disclosed at all. "Seeing" means that the bid is accepted:
the hands will be compared and the player with the stronger hand receives
the amount of the present bid. "Seeing" terminates the play. "Overbidding"
means that the opponent counters the present bid by a higher one,
in which the roles of the players are reversed and the previous bidder has
the choice of Passing, Seeing or Overbidding, etc. 2
19.2. Bluffing
19.2.1. The point in all this is that a player with a strong hand is likely
to make high bids and numerous overbids since he has good reason to
expect that he will win. Consequently a player who has made a high bid,
or overbid, may be assumed by his opponent (a posteriori!) to have a
strong hand. This may provide the opponent with a motive for "Passing."
However, since in the case of "Passing" the hands are not compared, even
a player with a weak hand may occasionally obtain a gain against a stronger
opponent by creating the (false) impression of strength by a high bid, or
by overbid, thus conceivably inducing his opponent to pass.
This maneuver is known as "Bluffing." It is unquestionably practiced
by all experienced players. Whether the above is its real motivation
may be doubted; actually a second interpretation is conceivable. That is:
if a player is known to bid high only when his hand is strong, his opponent is
likely to pass in such cases. The player will, therefore, not be able to collect
on high bids, or on numerous overbids, in just those cases where his actual
strength gives him the opportunity. Hence it is desirable for him to create
1 In actual Poker the second player draws from a deck from which the first player's
hand has already been removed. We disregard this as we disregard some other minor
complications of Poker.
2 This scheme is usually complicated by the necessity of making unconditional pay
ments, the "ante," at the start, in some variants for the first bidder, in others for all
those who wish to participate, again in others extra payments are required for the privi
lege of drawing, etc. We disregard all this.
uncertainty in his opponent's mind as to this correlation, i.e. to make it
known that he does occasionally bid high on a weak hand.
To sum up: Of the two possible motives for Bluffing, the first is the
desire to give a (false) impression of strength in (real) weakness; the second
is the desire to give a (false) impression of weakness in (real) strength.
Both are instances of inverted signaling (cf. 6.4.3.), i.e. of misleading the
opponent. It should be observed however that the first type of Bluffing
is most successful when it "succeeds," i.e. when the opponent actually
"passes," since this secures the desired gain; while the second is most
successful when it "fails," i.e. when the opponent "sees," since this will
convey to him the desired confusing information. 1
19.2.2. The possibility of such indirectly motivated (hence apparently
irrational) bids has also another consequence. Such bids are necessarily
risky, and therefore it can conceivably be worth while to make them riskier
by appropriate counter-measures, thus restricting their use by the opponent.
But such counter-measures are ipso facto also indirectly motivated
moves.
We have expounded these heuristic considerations at such length
because our exact theory makes a disentanglement of all these mixed
motives possible. It will be seen in 19.10. and in 19.15.3., 19.16.2. how the
phenomena which surround Bluffing can be understood quantitatively,
and how the motives are connected with the main strategic features of the
game, like possession of the initiative, etc.
19.3. Description of Poker (Continued)
19.3.1. Let us now return to the technical rules of Poker. In order to
avoid endless overbidding the number of bids is usually limited.² In order
to avoid unrealistically high bids (with hardly foreseeable irrational effects
upon the opponent) there are also maxima for each bid and overbid.
It is also customary to prohibit too small overbids; we shall subsequently
indicate what appears to be a good reason for this (cf. the end of 19.13.).
We shall express these restrictions on the size of bids and overbids in the
simplest possible form: We shall assume that two numbers a, b
a > b > 0
1 At this point we might be accused once more of disregarding our previously stated
guiding principle; the above discussion obviously assumes a series of plays (so that
statistical observation of the opponent's habits is possible) and it has a definitely
"dynamical" character. And yet we have repeatedly professed that our considerations
must be applicable to one isolated play and also that they are strictly statical.
We refer the reader to 17.3., where this apparent contradiction has been carefully
examined. Those considerations are fully valid in this case too, and should justify our
procedure. We shall add now only that our inconsistency (the use of many plays and
of a dynamical terminology) is a merely verbal one. In this way we were able to make
our discussions more succinct and more akin to the way that these things are talked about
in everyday language. But in 17.3. it was elaborated how all these questionable pictures
can be replaced by the strictly static problem of finding a good strategy.
² This is the stop rule of 7.2.3.
are given ab initio, and that for every bid there are only two possibilities:
the bid may be "high," in which case it is a; or "low," in which case it is
b. By varying the ratio a/b (which is clearly the only thing that matters)
we can make the game risky when a/b is much greater than 1, or relatively
safe when a/b is only a little greater than 1.
The limitation of the number of bids and overbids will now be used for a
simplification of the entire scheme. In the actual play one of the players
begins with the initial bid; after that the players alternate.
The advantage or disadvantage contained in the possession of the
initiative by one player (but concurrent with the necessity of acting first!)
constitutes an interesting problem in itself. We shall discuss an (unsymmetric)
form of Poker where this plays a role in 19.14., 19.15. But we
wish at first to avoid being saddled with this problem too. In other words,
we wish to avoid for the moment all deviations from symmetry, so as to
obtain the other essential features of Poker in their purest and simplest
form. We shall therefore assume that the two players both make initial
bids, each one ignorant of the other's choice. Only after both have made
this bid is each one informed of what the other did, i.e. whether his bid was
"high" or "low."
19.3.2. We simplify further by giving to the players only the choice of
"Passing" or "Seeing," i.e. by excluding "Overbidding." Indeed, "Overbidding"
is only a more elaborate and intensive expression of the tendency
which is already contained in a high initial bid. Since we wish to do things
as simply as possible, we shall avoid providing several channels for the same
tendency. (Cf. however (C) in 19.11. and 19.14., 19.15.)
Accordingly we prescribe these conditions: Consider the moment
when both players are informed of each other's bids. If it then develops
that both bid "high" or that both bid "low," then the hands are compared
and the player with the stronger hand receives the amount a or b respectively
from his opponent. If their hands are equal, no payment is made. If on
the other hand one bids "high" and one bids "low," then the player with
the low bid has the choice of "Passing" or "Seeing." "Passing" means
that he pays to the opponent the amount of the low bid (without any
consideration of their hands). "Seeing" means that he changes over
from his "low" bid to the "high" one, and the situation is treated as if
they both had bid "high" in the first place.
19.4. Exact Formulation of the Rules
19.4. We can now sum up the preceding description of our simplified
Poker by giving an exact statement of the rules agreed upon:
First, by a chance move each player obtains his "hand," a number
s = 1, ···, S, each one of these numbers having the same probability 1/S.
We denote the hands of players 1, 2 by s₁, s₂ respectively.
After this each player will, by a personal move, choose either a or b,
the "high" or "low" bid. Each player makes his choice (bid) informed
about his own hand, but not about his opponent's hand or choice (bid).
Lastly, each player is informed about the other's choice but not about his
hand. (Each still knows his own hand and choice.) If it turns out that
one bids "high" and the other "low," then the latter has the choice of
"Seeing" or "Passing."
This is the play. When it is concluded, the payments are made as
follows: If both players bid "high," or if one bids "high" and the other
bids "low" but subsequently "Sees," then for s₁ > s₂, s₁ = s₂, s₁ < s₂
player 1 obtains from player 2 the amount a, 0, −a respectively. If both
players bid "low," then for s₁ > s₂, s₁ = s₂, s₁ < s₂ player 1 obtains from
player 2 the amount b, 0, −b respectively. If one player bids "high," and
the other bids "low" and subsequently "Passes," then (the "high" bidder
being player 1 or player 2) player 1 obtains from player 2 the amount b or
−b respectively.
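The payment rules just stated can be collected into a single function. A sketch (the encoding of the choices by 1, 2, 3 anticipates the index i_s introduced in 19.5. below; all names are ours):

```python
def sign(x):
    return (x > 0) - (x < 0)

def payment_to_player1(i, j, s1, s2, a, b):
    """Amount player 1 obtains from player 2 in the simplified Poker of 19.4.
    s1, s2: the hands; a > b > 0: the "high" and "low" bids; i, j encode the
    choices: 1 = "high", 2 = "low" then "See", 3 = "low" then "Pass"."""
    if i == 1 and j == 3:             # player 2 bids "low" and Passes
        return b
    if i == 3 and j == 1:             # player 1 bids "low" and Passes
        return -b
    if i == 1 or j == 1:              # both "high", or the "low" bidder Sees
        return a * sign(s1 - s2)      # hands compared at the "high" bid
    return b * sign(s1 - s2)          # both bid "low": compared at the "low" bid

# E.g. with a = 4, b = 1: a Pass forfeits only the "low" bid, whatever the hands.
assert payment_to_player1(1, 3, s1=1, s2=9, a=4, b=1) == 1
```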
19.5. Description of the Strategies

19.5.1. A (pure) strategy in this game consists clearly of the following
specifications: To state for every "hand" s = 1, ···, S whether a "high"
or a "low" bid will be made, and in the latter case the further statement
whether, if this "low" bid runs into a "high" bid of the opponent, the player
will "See" or "Pass." It is simpler to describe this by a numerical index
i_s = 1, 2, 3: i_s = 1 meaning a "high" bid; i_s = 2 meaning a "low" bid
with subsequent "Seeing" (if the occasion arises); i_s = 3 meaning a "low"
bid with subsequent "Passing" (if the occasion arises). Thus the strategy
is a specification of such an index i_s for every s = 1, ···, S, i.e. of the
sequence i₁, ···, i_S.

This applies to both players 1 and 2. Accordingly we shall denote the
above strategy by Σ₁(i₁, ···, i_S) or Σ₂(j₁, ···, j_S).

Thus each player has the same number of strategies: as many as there
are sequences i₁, ···, i_S, i.e. precisely 3^S. With the notations of 11.2.2.

    β₁ = β₂ = β = 3^S.
¹ For the sake of absolute formal correctness this should still be arranged according
to the patterns of 6. and 7. in Chapter II. Thus the two first-mentioned chance moves
(the dealing of hands) should be called moves 1 and 2; the two subsequent personal moves
(the bids), moves 3 and 4; and the final personal move ("Passing" or "Seeing"), move 5.
In the case of move 5, both the player whose personal move it is, and the number
of alternatives, depend on the previous course of the play, as described in 7.1.2. and 9.1.5.
(If both players bid "high" or both bid "low," then the number of alternatives is 1, and
it does not matter to which player we ascribe this vacuous personal move. If one bids
"high" and the other bids "low," then the personal move is the "low" bidder's.)

A consistent use of the notations loc. cit. would also necessitate writing σ₁, σ₂ for
s₁, s₂; σ₃, σ₄ for the "high" or "low" bid; σ₅ for "Passing" or "Seeing."

We leave it to the reader to iron out all these differences.
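The count β₁ = β₂ = 3^S is easy to confirm by brute enumeration for a toy S (names ours):

```python
from itertools import product

# A pure strategy is a sequence (i_1, ..., i_S), one index in {1, 2, 3}
# per hand.  Toy value of S; in real Poker S = 2,598,960.
S = 4
strategies = list(product((1, 2, 3), repeat=S))
assert len(strategies) == 3 ** S          # beta_1 = beta_2 = 3**S

# Contrast with the 2*S constants of the vectors rho of 19.5.3.:
for s in (4, 8, 12):
    assert 3 ** s - 1 > 2 * s
print(3 ** S, "pure strategies for S =", S)
```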
If we wanted to adhere rigorously to the notations of loc. cit., we should
now enumerate the sequences i₁, ···, i_S with a τ₁ = 1, ···, β₁ and then
denote the (pure) strategies of the players 1, 2 by Σ₁^{τ₁}, Σ₂^{τ₂}. But we prefer
to continue with our present notations.

We must now express the payment which player 1 receives if the strategies
Σ₁(i₁, ···, i_S), Σ₂(j₁, ···, j_S) are used by the two players. This
is the matrix element ℋ(i₁, ···, i_S | j₁, ···, j_S).¹

If the players have actually the "hands" s₁, s₂ then the payment
received by player 1 can be expressed in this way (using the rules stated
above): It is ℒ_{sgn(s₁−s₂)}(i_{s₁}, j_{s₂}), where sgn(s₁ − s₂) is the sign of s₁ − s₂,² and
where the three functions

    ℒ₊(i, j),  ℒ₀(i, j),  ℒ₋(i, j)        (i, j = 1, 2, 3)

can be represented by the following matrix schemes:³
Figure 36 (ℒ₊):

    i \ j |  1    2    3
    ------+--------------
      1   |  a    a    b
      2   |  a    b    b
      3   | −b    b    b

Figure 37 (ℒ₀):

    i \ j |  1    2    3
    ------+--------------
      1   |  0    0    b
      2   |  0    0    0
      3   | −b    0    0

Figure 38 (ℒ₋):

    i \ j |  1    2    3
    ------+--------------
      1   | −a   −a    b
      2   | −a   −b   −b
      3   | −b   −b   −b
Now s₁, s₂ originate from chance moves, as described above. Hence

    ℋ(i₁, ···, i_S | j₁, ···, j_S) = (1/S²) Σ_{s₁=1}^{S} Σ_{s₂=1}^{S} ℒ_{sgn(s₁−s₂)}(i_{s₁}, j_{s₂}).⁴
19.5.2. We now pass to the (mixed) strategies in the sense of 17.2.
These are the vectors ξ, η belonging to S_β. Considering the notations

¹ The entire sequence i₁, ···, i_S is the row index, and the entire sequence j₁, ···,
j_S is the column index. In our original notations the strategies were Σ₁^{τ₁}, Σ₂^{τ₂} and the
matrix element ℋ(τ₁, τ₂).

² I.e. for s₁ > s₂, s₁ = s₂, s₁ < s₂ respectively. It expresses in an arithmetical form which hand is
stronger.

³ The reader will do well to compare these matrix schemes with our verbal statements
of the rules, and to verify their appropriateness.

Another circumstance which is worth observing is that the symmetry of the game
corresponds to the identities

    ℒ₊(i, j) ≡ −ℒ₋(j, i),    ℒ₀(i, j) ≡ −ℒ₀(j, i).

⁴ The reader may verify

    ℋ(i₁, ···, i_S | j₁, ···, j_S) = −ℋ(j₁, ···, j_S | i₁, ···, i_S)

as a consequence of the relations at the end of footnote 3 above. I.e.

    ℋ(i₁, ···, i_S | j₁, ···, j_S)

is skew-symmetric, expressing once more the symmetry of the game.
which we are now using, we must index the components of these vectors
also in the new way: We must write ξ_{i₁ ··· i_S}, η_{j₁ ··· j_S} instead of ξ_{τ₁}, η_{τ₂}.

We express (17:2) of 17.4.1., which evaluates the expectation value of
player 1's gain:

    K(ξ, η) = Σ_{i₁, ···, i_S} Σ_{j₁, ···, j_S} ℋ(i₁, ···, i_S | j₁, ···, j_S) ξ_{i₁ ··· i_S} η_{j₁ ··· j_S}.

There is an advantage in interchanging the two Σ and writing

    K(ξ, η) = (1/S²) Σ_{s₁, s₂=1}^{S} Σ_{i₁, ···, i_S} Σ_{j₁, ···, j_S} ℒ_{sgn(s₁−s₂)}(i_{s₁}, j_{s₂}) ξ_{i₁ ··· i_S} η_{j₁ ··· j_S}.

If we now put

(19:1)    ρ_i^{s₁} = Σ ξ_{i₁ ··· i_S}    (sum over all i₁, ···, i_S excluding i_{s₁}, with i_{s₁} = i),

(19:2)    σ_j^{s₂} = Σ η_{j₁ ··· j_S}    (sum over all j₁, ···, j_S excluding j_{s₂}, with j_{s₂} = j),

then the above equation becomes

(19:3)    K(ξ, η) = (1/S²) Σ_{s₁, s₂=1}^{S} Σ_{i, j=1}^{3} ℒ_{sgn(s₁−s₂)}(i, j) ρ_i^{s₁} σ_j^{s₂}.
It is worth while to expound the meaning of (19:1)-(19:3) verbally.
(19:1) shows that ρ_i^{s₁} is the probability that player 1, using the mixed
strategy ξ, will choose i when his "hand" is s₁; (19:2) shows that σ_j^{s₂} is the
probability that player 2, using the mixed strategy η, will choose j when his
"hand" is s₂.¹ Now it is intuitively clear that the expectation value
K(ξ, η) depends on these probabilities ρ_i^{s₁}, σ_j^{s₂} only, and not on the underlying
probabilities ξ_{i₁ ··· i_S}, η_{j₁ ··· j_S} themselves.² The formula (19:3) can

¹ We know from 19.4. that i or j = 1 means a "high" bid, i = 2, 3 a "low" bid with
(the intention of) a subsequent "Seeing" or "Passing" respectively.

² This means that two different mixtures of (pure) strategies may in actual effect be
the same thing.

Let us illustrate this by a simple example. Put S = 2, i.e. let there be only a "high"
and a "low" hand. Consider i = 2, 3 as one thing, i.e. let there be only a "high" and a
easily be seen to be correct in this direct way: It suffices to remember
the meaning of the ℒ_{sgn(s₁−s₂)}(i, j) and the interpretation of the ρ_i^{s₁}, σ_j^{s₂}.

19.5.3. It is clear, both from the meaning of the ρ_i^{s₁}, σ_j^{s₂} and from their
formal definition (19:1), (19:2), that they fulfill the conditions

(19:4)    all ρ_i^{s₁} ≥ 0,    Σ_{i=1}^{3} ρ_i^{s₁} = 1,

(19:5)    all σ_j^{s₂} ≥ 0,    Σ_{j=1}^{3} σ_j^{s₂} = 1.

On the other hand, any ρ_i^{s₁}, σ_j^{s₂} which fulfill these conditions can be obtained
from suitable ξ, η by (19:1), (19:2). This is clear mathematically,¹ and
intuitively as well. Any such system of ρ_i^{s₁}, σ_j^{s₂} is one of probabilities which
define a possible modus procedendi, so they must correspond to some mixed
strategy.

(19:4), (19:5) make it opportune to form the 3-dimensional vectors

    ρ^{s₁} = {ρ₁^{s₁}, ρ₂^{s₁}, ρ₃^{s₁}},    σ^{s₂} = {σ₁^{s₂}, σ₂^{s₂}, σ₃^{s₂}}.

Then (19:4), (19:5) state precisely that all ρ^{s₁}, σ^{s₂} belong to S₃.

This shows how much of a simplification the introduction of these
vectors is: ξ (or η) was a vector in S_β, i.e. depending on β − 1 = 3^S − 1
numerical constants; the ρ^{s₁} (or the σ^{s₂}) are S vectors in S₃, i.e. each one
depends on 2 numerical constants; hence they amount together to 2S
numerical constants. And 3^S − 1 is much greater than 2S, even for moderate
S.²
"low" bid. Then there are four possible (pure) strategies, to which we shall give names:

"Bold": Bid "high" on every hand.
"Cautious": Bid "low" on every hand.
"Normal": Bid "high" on a "high" hand, "low" on a "low" hand.
"Bluff": Bid "high" on a "low" hand, "low" on a "high" hand.

Then a 50-50 mixture of "Bold" and "Cautious" is in effect the same thing as a
50-50 mixture of "Normal" and "Bluff": both mean that the player will, according
to chance, bid 50-50 "high" or "low" on any hand.

Nevertheless these are, in our present notations, two different "mixed" strategies,
i.e. vectors ξ.

This means, of course, that our notations, which were perfectly suited to the general
case, are redundant for many particular games. This is a frequent occurrence in mathematical
discussions with general aims.

There was no reason to take account of this redundance as long as we were working
out the general theory. But we shall remove it now for the particular game under
consideration.
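The redundancy described in this footnote can be exhibited numerically: the two 50-50 mixtures are different vectors ξ but induce identical probabilities of bidding "high" on each hand. A sketch (S = 2, with the indices 2, 3 merged into a single "low" bid as in the footnote; all names ours):

```python
from fractions import Fraction

# Pure strategies for S = 2 hands, written (bid on "low" hand, bid on "high" hand):
BOLD     = ("high", "high")
CAUTIOUS = ("low",  "low")
NORMAL   = ("low",  "high")
BLUFF    = ("high", "low")

def high_bid_probabilities(mixture):
    """mixture: {pure strategy: probability}.  Returns, for each of the two
    hands, the induced probability of bidding "high" (the rho of 19.5.)."""
    return tuple(
        sum(p for strategy, p in mixture.items() if strategy[hand] == "high")
        for hand in (0, 1)
    )

half = Fraction(1, 2)
mix1 = {BOLD: half, CAUTIOUS: half}     # 50-50 "Bold" / "Cautious"
mix2 = {NORMAL: half, BLUFF: half}      # 50-50 "Normal" / "Bluff"

assert mix1 != mix2                     # two different mixed strategies ...
# ... yet the same rho: "high" with probability 1/2 on either hand.
assert high_bid_probabilities(mix1) == high_bid_probabilities(mix2) == (half, half)
```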
¹ Put e.g. ξ_{i₁ ··· i_S} = ρ_{i₁}^{1} ··· ρ_{i_S}^{S}, η_{j₁ ··· j_S} = σ_{j₁}^{1} ··· σ_{j_S}^{S} and verify the (17:1:a),
(17:1:b) of 17.2.1. as consequences of the above (19:4), (19:5).

² Actually S is about 2½ millions (cf. footnote 4 on p. 187); so 3^S − 1 and 2S are both
great, but the former is quite exorbitantly the greater.
19.6. Statement of the Problem
19.6. Since we are dealing with a symmetric game, we can use the characterization
of the good (mixed) strategies, i.e. of the ξ in Ā, given in
(17:H) of 17.11.2. It stated this: ξ must be optimal against itself, i.e.
Min_η K(ξ, η) must be assumed for η = ξ.

Now we saw in 19.5. that K(ξ, η) depends actually on the ρ^{s₁}, σ^{s₂}. So
we may write for it K(ρ¹, ···, ρ^S; σ¹, ···, σ^S). Then (19:3) in 19.5.2.
states (we rearrange the Σ somewhat)

(19:6)    K(ρ¹, ···, ρ^S; σ¹, ···, σ^S) = (1/S²) Σ_{s₁, s₂=1}^{S} Σ_{i, j=1}^{3} ℒ_{sgn(s₁−s₂)}(i, j) ρ_i^{s₁} σ_j^{s₂}.

And the characteristic of the ρ¹, ···, ρ^S of a good strategy is that

    Min_{σ¹, ···, σ^S} K(ρ¹, ···, ρ^S; σ¹, ···, σ^S)

is assumed at σ¹ = ρ¹, ···, σ^S = ρ^S. The explicit conditions for this
can be found in essentially the same way as in the similar problem of 17.9.1.;
we will give a brief alternative discussion.

The Min_{σ¹, ···, σ^S} of (19:6) amounts to a minimum with respect to
each σ^{s₂} separately. Consider therefore such a σ^{s₂}. It is
restricted only by the requirement to belong to S₃, i.e. by

    all σ_j^{s₂} ≥ 0,    Σ_{j=1}^{3} σ_j^{s₂} = 1.

(19:6) is a linear expression in these three components σ₁^{s₂}, σ₂^{s₂}, σ₃^{s₂}. Hence
it assumes its minimum with respect to σ^{s₂} there where all those components
σ_j^{s₂} which do not have the smallest possible coefficient (with respect to j,
cf. below) vanish.

The coefficient of σ_j^{s₂} is

    (1/S²) Σ_{s₁=1}^{S} Σ_{i=1}^{3} ℒ_{sgn(s₁−s₂)}(i, j) ρ_i^{s₁},    to be denoted by (1/S) γ_j^{s₂}.

Thus (19:6) becomes

(19:7)    K(ρ¹, ···, ρ^S; σ¹, ···, σ^S) = (1/S) Σ_{s₂=1}^{S} Σ_{j=1}^{3} γ_j^{s₂} σ_j^{s₂}.
And the condition for the minimum (with respect to σ^{s₂}) is this:

(19:8)    For each pair s₂, j for which γ_j^{s₂} does not assume its minimum
(in j¹), we have σ_j^{s₂} = 0.

Hence the characteristic of a good strategy (minimization at σ¹ = ρ¹, ···,
σ^S = ρ^S) is:

(19:A)    ρ¹, ···, ρ^S describe a good strategy, i.e. a ξ in Ā, if and
only if this is true:

For each pair s₂, j for which γ_j^{s₂} does not assume its minimum
(in j¹), we have ρ_j^{s₂} = 0.

We finally state the explicit expressions for the γ_j^{s₂}, of course by using the
matrix schemes of Figures 36-38. They are

(19:9:a)    γ₁^{s₂} = (1/S) { Σ_{s₁=1}^{s₂−1} (−a ρ₁^{s₁} − a ρ₂^{s₁} − b ρ₃^{s₁}) − b ρ₃^{s₂} + Σ_{s₁=s₂+1}^{S} (a ρ₁^{s₁} + a ρ₂^{s₁} − b ρ₃^{s₁}) },

(19:9:b)    γ₂^{s₂} = (1/S) { Σ_{s₁=1}^{s₂−1} (−a ρ₁^{s₁} − b ρ₂^{s₁} − b ρ₃^{s₁}) + Σ_{s₁=s₂+1}^{S} (a ρ₁^{s₁} + b ρ₂^{s₁} + b ρ₃^{s₁}) },

(19:9:c)    γ₃^{s₂} = (1/S) { Σ_{s₁=1}^{s₂−1} (b ρ₁^{s₁} − b ρ₂^{s₁} − b ρ₃^{s₁}) + b ρ₁^{s₂} + Σ_{s₁=s₂+1}^{S} b }.
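The coefficients γ_j^{s₂} can also be computed directly from their definition, using the matrix schemes of Figures 36-38 (here reconstructed from the verbal rules of 19.4.; the particular numbers below, and all names, are our own illustrative choices):

```python
from fractions import Fraction

def L(sgn, i, j, a, b):
    """Payment to player 1 when his hand is stronger (sgn = +1),
    equal (sgn = 0) or weaker (sgn = -1): Figures 36, 37, 38."""
    M = {
        +1: [[a, a, b], [a, b, b], [-b, b, b]],
        0:  [[0, 0, b], [0, 0, 0], [-b, 0, 0]],
        -1: [[-a, -a, b], [-a, -b, -b], [-b, -b, -b]],
    }
    return M[sgn][i - 1][j - 1]

def gamma(j, s2, rho, a, b):
    """gamma_j^{s2} = (1/S) * sum over s1, i of L_{sgn(s1-s2)}(i, j) * rho_i^{s1}."""
    S = len(rho)
    total = Fraction(0)
    for s1 in range(1, S + 1):
        sgn = (s1 > s2) - (s1 < s2)
        for i in (1, 2, 3):
            total += L(sgn, i, j, a, b) * rho[s1 - 1][i - 1]
    return total / S

# Illustrative choice: S = 3 hands, a = 2, b = 1, and a trial strategy that
# bids "high" (i = 1) on the top hand, "low"-then-"See" (i = 2) otherwise.
a, b = Fraction(2), Fraction(1)
rho = [[0, 1, 0], [0, 1, 0], [1, 0, 0]]
print([gamma(j, 2, rho, a, b) for j in (1, 2, 3)])
```

The symmetry identities of footnote 3 to 19.5. (ℒ₊(i, j) ≡ −ℒ₋(j, i), ℒ₀(i, j) ≡ −ℒ₀(j, i)) furnish a convenient check on the entries of `L`; by (19:A), a good strategy must have ρ_j^{s₂} = 0 wherever γ_j^{s₂} is not minimal in j.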
19.7. Passage from the Discrete to the Continuous Problem
19.7.1. The criterion (19 :A) of 19.6., together with the formulae (19:7),
(19:9:a), (19:9:b), (19:9:c), can be used to determine all good strategies. 2
This discussion is of a rather tiresome combinatorial character, involving
the analysis of a number of alternatives. The results which are obtained
¹ We mean in j and not in s₂, j.
2 This determination has been carried out by one of us and will be published elsewhere.
are qualitatively similar to those which we shall derive below under somewhat
modified assumptions, except for certain differences in very delicate
detail which may be called the "fine structure" of the strategy. We shall
say more about this in 19.12.
For the moment we are chiefly interested in the main features of the solution
and not in the question of "fine structure." We begin by turning our
attention to the "granular" structure of the sequence of possible hands
s = 1, ···, S.
If we try to picture the strength of all possible "hands" on a scale from
0% to 100%, or rather of fractions from 0 to 1, then the weakest possible
hand, 1, will correspond to 0, and the strongest possible hand, S, to 1.

Hence the "hand" s (= 1, ···, S) should be placed at z = (s − 1)/(S − 1) on this
scale. I.e. we have this correspondence:

    Old scale:  s =   1      2          3         ···    S − 1           S
    New scale:  z =   0   1/(S − 1)  2/(S − 1)    ···  (S − 2)/(S − 1)   1

    Figure 39.
Thus the values of z fill the interval

(19:10)    0 ≤ z ≤ 1

very densely,¹ but they form nevertheless a discrete sequence. This is the
"granular" structure referred to above. We will now replace it by a
continuous one.

I.e. we assume that the chance move which chooses the hand, i.e. z,
may produce any z of the interval (19:10). We assume that the probability
of any part of (19:10) is the length of that part, i.e. that z is equidistributed
over (19:10).² We denote the "hands" of the two players 1, 2 by z₁, z₂
respectively.
19.7.2. This change entails that we replace the vectors
ρ^{s₁}, σ^{s₂} (s₁, s₂ = 1, ···, S) by vectors ρ^{z₁}, σ^{z₂} (0 ≤ z₁, z₂ ≤ 1); but
they are, of course, still probability vectors of the same nature as before,
i.e. belonging to S₃. In consequence, the components (probabilities)
ρ_i^{s₁}, σ_j^{s₂} (s₁, s₂ = 1, ···, S; i, j = 1, 2, 3) give way to the components
ρ_i^{z₁}, σ_j^{z₂} (0 ≤ z₁, z₂ ≤ 1; i, j = 1, 2, 3). Similarly the γ_j^{s₂} (in (19:9:a), (19:9:b),
(19:9:c) of 19.6.) become γ_j^{z₂}.
We now rewrite the expressions for K and the γ_j^{s₂} in the formulae (19:7),
(19:9:a), (19:9:b), (19:9:c) in 19.6. Clearly all sums
¹ It will be remembered (cf. footnote 4 on p. 187) that S is about 2½ millions.
² This is the so-called geometrical probability.
    (1/S) Σ_{s₁=1}^{S} ···,    (1/S) Σ_{s₂=1}^{S} ···

must be replaced by integrals

    ∫₀¹ ··· dz₁,    ∫₀¹ ··· dz₂,

and sums

    (1/S²) Σ_{s₁=1}^{S} Σ_{s₂=1}^{S} ···

by integrals

    ∫₀¹ ∫₀¹ ··· dz₁ dz₂,

while isolated terms behind a factor 1/S may be neglected.¹ ² These being
understood, the formulae for K and the γ_j^{s₂} (i.e. γ_j^{z₂}) become:

(19:7*)      K = ∫₀¹ Σ_{j=1}^{3} γ_j^{z₂} σ_j^{z₂} dz₂,

(19:9:a*)    γ₁^{z} = ∫₀^{z} (−a ρ₁^{z₁} − a ρ₂^{z₁} − b ρ₃^{z₁}) dz₁ + ∫_{z}^{1} (a ρ₁^{z₁} + a ρ₂^{z₁} − b ρ₃^{z₁}) dz₁,

(19:9:b*)    γ₂^{z} = ∫₀^{z} (−a ρ₁^{z₁} − b ρ₂^{z₁} − b ρ₃^{z₁}) dz₁ + ∫_{z}^{1} (a ρ₁^{z₁} + b ρ₂^{z₁} + b ρ₃^{z₁}) dz₁,

(19:9:c*)    γ₃^{z} = ∫₀^{z} (b ρ₁^{z₁} − b ρ₂^{z₁} − b ρ₃^{z₁}) dz₁ + ∫_{z}^{1} b dz₁.
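As a numerical check of the passage to the continuum, the integrals may be approximated by Riemann sums. A minimal sketch of (19:9:c*) alone, for an illustrative threshold strategy of our own choosing:

```python
# Numerical check of (19:9:c*):
#   gamma_3(z) = int_0^z (b*rho1 - b*rho2 - b*rho3) dz1 + int_z^1 b dz1,
# for an illustrative strategy (our own choice): bid "high" above a
# threshold c, otherwise "low"-then-"See"; so rho3 = 0 throughout.
def gamma3(z, b, c, n=100_000):
    h = 1.0 / n
    acc = 0.0
    for k in range(int(z * n)):          # midpoint Riemann sum on [0, z]
        z1 = (k + 0.5) * h
        rho1, rho2 = (1, 0) if z1 > c else (0, 1)
        acc += b * (rho1 - rho2) * h
    return acc + b * (1 - z)             # the second integral is b*(1 - z)

# For z <= c the closed form is -b*z + b*(1 - z) = b*(1 - 2*z); spot check:
b, c = 1.0, 0.5
assert abs(gamma3(0.3, b, c) - b * (1 - 2 * 0.3)) < 1e-3
```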
And the characterization (19:A) of 19.6. goes over into this:

(19:B)    The ρ^{z} (0 ≤ z ≤ 1) (they all belong to S₃) describe a good
strategy if and only if this is true:

For each z, j for which γ_j^{z} does not assume its minimum
(in j³), we have ρ_j^{z} = 0.⁴
¹ Specifically we mean the middle terms −bρ₃^{s₂} and bρ₁^{s₂} in (19:9:a) and (19:9:c).
² These terms correspond to s₁ = s₂, in our present set-up to z₁ = z₂; and since the
z₁, z₂ are continuous variables, the probability of their (fortuitous) coincidence is indeed 0.
Mathematically one may describe these operations by saying that we are now
carrying out the limiting process S → ∞.
³ We mean in j, and not in z, j!
⁴ The formulae (19:7*), (19:9:a*), (19:9:b*), (19:9:c*) and this criterion could also have
been derived directly by discussing this "continuous" arrangement, with the ρ^{z₁}, σ^{z₂}
in place of the ξ, η from the start. We preferred the lengthier and more explicit proce-
dure followed in 19.4.-19.7. in order to make the rigor and the completeness of our proce-
dure apparent. The reader will find it a good exercise to carry out the shorter, direct
discussion mentioned above.
It would be tempting to build up a theory of games into which such continuous
parameters enter, systematically and directly; i.e. in sufficient generality for applications
like the present one, and without the necessity of a limiting process from discrete games.
An interesting step in this direction was taken by J. Ville in the work referred to in
footnote 1 on p. 154: pp. 110-113 loc. cit. The continuity assumptions made there seem,
however, to be too restrictive for many applications, in particular for the present one.
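The limiting process S → ∞ of footnote 2 can be sketched numerically. The following is an added illustration, not part of the original argument; the test function and grid sizes are arbitrary assumptions:

```python
# Sketch of the limiting process S -> oo: the discrete average over the hands
# s = 1, ..., S, placed at z = (s - 1)/(S - 1) as in Figure 39, approaches the
# integral over the equidistributed continuous hand z in [0, 1].

def discrete_average(f, S):
    # (1/S) * sum over the S hands on the z scale
    return sum(f((s - 1) / (S - 1)) for s in range(1, S + 1)) / S

def integral(f, n=100000):
    # midpoint rule on [0, 1]
    return sum(f((k + 0.5) / n) for k in range(n)) / n

f = lambda z: z * z                      # an arbitrary smooth test function
exact = 1.0 / 3.0                        # integral of z^2 over [0, 1]
errors = [abs(discrete_average(f, S) - exact) for S in (10, 100, 1000)]
```

With S of the order of 2½ millions, as in real Poker, the discrepancy between sum and integral is entirely negligible.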
POKER AND BLUFFING 199
19.8. Mathematical Determination of the Solution
19.8.1. We now proceed to the determination of the good strategies ρ^z,
i.e. of the solution of the implicit condition (19:B) of 19.7.

Assume first that ρ₂^z > 0 ever happens.¹ For such a z necessarily
Min_j γ_j^z = γ₂^z, hence γ₂^z ≤ γ₁^z, i.e.

γ₂^z − γ₁^z ≤ 0.

Substituting (19:9:a*), (19:9:b*) into this gives

(19:11) (a − b) { ∫₀^z ρ₂^{z₁} dz₁ − ∫_z^1 ρ₂^{z₁} dz₁ } + 2b ∫_z^1 ρ₃^{z₁} dz₁ ≤ 0.

Now let z₀ be the upper limit of these z with ρ₂^z > 0.² Then (19:11) holds
by continuity for z = z₀ too. As ρ₂^{z₁} > 0 does not occur for z₁ > z₀ by
hypothesis, the ∫_{z₀}^1 ρ₂^{z₁} dz₁ term in (19:11) is now 0. So we may write it
with + instead of −, and (19:11) becomes:

(a − b) ∫₀^{z₀} ρ₂^{z₁} dz₁ + 2b ∫_{z₀}^1 ρ₃^{z₁} dz₁ ≤ 0.

But ρ₂^{z₁} is always ≥ 0 and sometimes > 0, by hypothesis; hence the first
term is > 0.³ ⁴ The second term is clearly ≥ 0. So we have derived
a contradiction. I.e. we have shown

(19:12) ρ₂^z ≡ 0.⁵
19.8.2. Having eliminated j = 2, we now analyze the relationship of
j = 1 and j = 3. Since ρ₂^z ≡ 0, we have ρ₁^z + ρ₃^z = 1, i.e.:

(19:13) ρ₃^z = 1 − ρ₁^z,

and consequently

(19:14) 0 ≤ ρ₁^z ≤ 1.

Now there may exist in the interval 0 ≤ z ≤ 1 subintervals in which
always ρ₁^z = 0 or always ρ₁^z = 1.⁶ A z which is not inside any interval of
¹ I.e. that the good strategy under consideration provides for j = 2, i.e. "low"
bidding with (the intention of) subsequent "Seeing," under certain conditions.
² I.e. the greatest z₀ such that ρ₂^z > 0 occurs arbitrarily near to z₀. (But we do not
require ρ₂^z > 0 for all z < z₀.) This z₀ exists certainly if z with ρ₂^z > 0 exist at all.
³ Of course a − b > 0.
⁴ It does not seem necessary to go into the detailed fine points of the theory of integra-
tion, measure, etc. We assume that our functions are smooth enough so that a positive
function has a positive integral, etc. An exact treatment could be given with ease if we
made use of the pertinent mathematical theories mentioned above.
⁵ The reader should reformulate this verbally: We excluded "low" bids with (the
intention of) subsequent "Seeing" by analysing conditions at the (hypothetical) upper
limit of the hands for which this would be done; and showed that near there, at least, an
outright "high" bid would be preferable.
This is, of course, conditioned by our simplification which forbids "overbidding."
⁶ I.e. where the strategy directs the player to bid always "high," or where it directs
him to bid always "low" (with subsequent "Passing").
either kind, i.e. arbitrarily near to which both ρ₁^{z'} ≠ 0 and ρ₁^{z'} ≠ 1 occur,
will be called intermediate. Since ρ₁^{z'} ≠ 0 or ρ₁^{z'} ≠ 1 (i.e. ρ₃^{z'} ≠ 0) imply
Min_j γ_j^{z'} = γ₁^{z'} or γ₃^{z'} respectively, we see: Both γ₁^{z'} ≤ γ₃^{z'} and
γ₃^{z'} ≤ γ₁^{z'} occur arbitrarily near to an intermediate z. Hence for such a z,
γ₁^z = γ₃^z by continuity,¹ i.e.

γ₃^z − γ₁^z = 0.

Substituting (19:9:a*), (19:9:c*) and recalling (19:12), (19:13), gives

(a + b) ∫₀^z ρ₁^{z₁} dz₁ − (a − b) ∫_z^1 ρ₁^{z₁} dz₁ + 2b ∫_z^1 (1 − ρ₁^{z₁}) dz₁ = 0,

i.e.

(19:15) (a + b) { ∫₀^z ρ₁^{z₁} dz₁ − ∫_z^1 ρ₁^{z₁} dz₁ } + 2b(1 − z) = 0.

Consider next two intermediate z', z''. Apply (19:15) to z = z' and
z = z'' and subtract. Then

2(a + b) ∫_{z'}^{z''} ρ₁^{z₁} dz₁ − 2b(z'' − z') = 0

obtains, i.e.

(19:16) (1/(z'' − z')) ∫_{z'}^{z''} ρ₁^{z₁} dz₁ = b/(a + b).

Verbally: Between two intermediate z', z'' the average of ρ₁^z is b/(a + b).
So neither ρ₁^z ≡ 0 nor ρ₁^z ≡ 1 can hold throughout the interval

z' ≤ z ≤ z'',

since that would yield the average 0 or 1. Hence this interval must contain
(at least) a further intermediate z, i.e. between any two intermediate places
there lies (at least) a third one. Iteration of this result shows that between
two intermediate z', z'' the further intermediate z lie everywhere dense.
Hence the pairs z̄', z̄'' for which (19:16) holds lie everywhere dense between z', z''.
But then (19:16) must hold for all z̄', z̄'' between z', z'', by continuity.²
This leaves no alternative but that ρ₁^z = b/(a + b) everywhere between z', z''.³
¹ The γ_j^z are defined by the integrals (19:9:a*), (19:9:b*), (19:9:c*), hence they are certainly
continuous.
² The integral in (19:16) is certainly continuous.
³ Clearly isolated exceptions covering a z area of total probability zero (e.g. a
finite number of fixed z's) could be permitted. They alter no integrals. An exact
mathematical treatment would be easy, but does not seem to be called for in this context
(cf. footnote 4 on p. 199). So it seems simplest to assume ρ₁^z = b/(a + b) in z' ≤ z ≤ z''
without any exceptions.
This ought to be kept in mind when appraising the formulae of the next pages, which
deal with the interval z' ≤ z ≤ z'' on one hand, and with the intervals 0 ≤ z < z' and
z'' < z ≤ 1 on the other; i.e. which count the points z', z'' to the first-mentioned interval.
This is, of course, irrelevant: two fixed, isolated points z' and z'' could in this case
be disposed of in any way (cf. above).
The reader must observe, however, that while there is no significant difference
19.8.3. Now if intermediate z exist at all, then there exists a smallest
one and a largest one; choose z', z'' as these. We have

(19:17) ρ₁^z = b/(a + b) throughout z' ≤ z ≤ z''.

If no intermediate z exist, then we must have ρ₁^z ≡ 0 (for all z) or ρ₁^z ≡ 1
(for all z). It is easy to see that neither is a solution.¹ Thus intermediate z
do exist, and with them z', z'' exist, and (19:17) is valid.

19.8.4. The left hand side of (19:15) is γ₃^z − γ₁^z for all z; hence for z = 1
it is > 0 (since ρ₁^z ≡ 0 is excluded). By continuity γ₃^z − γ₁^z > 0, i.e. γ₁^z < γ₃^z, remains
true even when z is merely near enough to 1. Hence ρ₃^z = 0, i.e. ρ₁^z = 1, for
these z. Thus (19:17) necessitates z'' < 1. Now no intermediate z exists
in z'' ≤ z ≤ 1; hence we have ρ₁^z ≡ 0 or ρ₁^z ≡ 1 throughout this interval.
Our preceding result excludes the former. Hence

(19:18) ρ₁^z ≡ 1 throughout z'' ≤ z ≤ 1.

19.8.5. Consider finally the lower end of (19:17), z'. If z' > 0, then we
have an interval 0 ≤ z ≤ z'. This interval contains no intermediate z;
hence we have ρ₁^z ≡ 0 or ρ₁^z ≡ 1 throughout 0 ≤ z ≤ z'. The derivative
of γ₃^z − γ₁^z, i.e. of the left side of (19:15), is clearly 2(a + b)ρ₁^z − 2b. Hence
in 0 ≤ z < z' this derivative is 2(a + b)·0 − 2b = −2b < 0 if ρ₁^z ≡ 0
there, and 2(a + b)·1 − 2b = 2a > 0 if ρ₁^z ≡ 1 there; i.e. γ₃^z − γ₁^z is monotone
decreasing or increasing, respectively, throughout 0 ≤ z < z'. Since its
value is 0 at the upper end (the intermediate point z'), we have γ₃^z − γ₁^z > 0
or < 0 respectively, i.e. γ₁^z < γ₃^z or γ₃^z < γ₁^z respectively, throughout
0 ≤ z < z'. The former necessitates ρ₃^z ≡ 0, i.e. ρ₁^z ≡ 1; the latter ρ₁^z ≡ 0 in
0 ≤ z < z'; but the hypotheses with which we started were ρ₁^z ≡ 0 or ρ₁^z ≡ 1,
respectively, there. So there is a contradiction in each case.
Consequently

(19:19) z' = 0.

19.8.6. And now we determine z'' by expressing the validity of (19:15)
for the intermediate z = z' = 0. Then (19:15) becomes

−(a + b) ∫₀¹ ρ₁^{z₁} dz₁ + 2b = 0.

between a < and a ≤ when the z's themselves are compared, this is not so for the γ's.
Thus we saw that γ₂^z > γ₁^z implies ρ₂^z = 0, while γ₂^z ≥ γ₁^z has no such consequence.
(Cf. also the discussion of Fig. 41 and of Figs. 47, 48.)
¹ I.e. bidding "low" (with subsequent "Pass") under all conditions is not a good
strategy; nor is bidding "high" under all conditions.
Mathematical proof: For ρ₁^z ≡ 0: Compute γ₁^z ≡ −b, γ₃^z ≡ b(1 − 2z); hence γ₁^z < γ₃^z (for z < 1),
contradicting ρ₃^z ≡ 1 ≠ 0. For ρ₁^z ≡ 1: Compute γ₁^z ≡ a(1 − 2z), γ₃^z ≡ b; hence γ₃^z < γ₁^z (for z near 0),
contradicting ρ₁^z ≡ 1 ≠ 0.
But (19:17), (19:18), (19:19) give

∫₀¹ ρ₁^{z₁} dz₁ = (b/(a + b)) z'' + (1 − z'').

So we have

(b/(a + b)) z'' + (1 − z'') = 2b/(a + b), i.e. (a/(a + b)) z'' = 1 − 2b/(a + b) = (a − b)/(a + b),

i.e.

(19:20) z'' = (a − b)/a.

Combining (19:17), (19:18), (19:19), (19:20) gives:

(19:21) ρ₁^z = b/(a + b) for 0 ≤ z ≤ (a − b)/a,
        ρ₁^z = 1 for (a − b)/a ≤ z ≤ 1.

Together with (19:12), (19:13) this characterizes the strategy completely.
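The determination just completed can be checked numerically. The following is an added illustration, not part of the original; the values a = 3, b = 1 and the grid size are arbitrary assumptions:

```python
# Check that rho_1^z of (19:21) (with rho_2^z = 0, rho_3^z = 1 - rho_1^z)
# satisfies (19:15): (a+b)(int_0^z rho_1 dz1 - int_z^1 rho_1 dz1) + 2b(1-z) = 0
# for every z in the "Bluffing" zone 0 <= z <= (a - b)/a.
a, b = 3.0, 1.0
z_high = (a - b) / a                     # z'' of (19:20)

def rho1(z):
    # the good strategy (19:21)
    return b / (a + b) if z <= z_high else 1.0

def lhs_19_15(z, n=20000):
    # midpoint-rule evaluation of the left-hand side of (19:15)
    pts = [(k + 0.5) / n for k in range(n)]
    below = sum(rho1(t) for t in pts if t < z) / n
    above = sum(rho1(t) for t in pts if t > z) / n
    return (a + b) * (below - above) + 2 * b * (1 - z)

residuals = [abs(lhs_19_15(z)) for z in (0.0, 0.2, 0.4, z_high)]
```

For these values the zone boundary is z'' = 2/3 and the bluffing probability is b/(a + b) = 1/4; the residuals vanish up to discretization error.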
19.9. Detailed Analysis of the Solution

19.9.1. The results of 19.8. ascertain that there exists one and only one
good strategy in the form of Poker under consideration.¹ It is described
by (19:21), (19:12), (19:13) in 19.8. We shall give a graphical picture of this
strategy, which will make it easier to discuss it verbally in what follows.
(Cf. Figure 40. The actual proportions of this figure correspond to
a/b ~ 3.)

Figure 40 plots the curve ρ = ρ₁^z. Thus the height of this curve above
the line ρ = 0 is the probability of a "high" bid: ρ₁^z; the height of the line
ρ = 1 above the curve is the probability of a "low" bid (necessarily with sub-
sequent "pass"): ρ₃^z = 1 − ρ₁^z.

19.9.2. The formulae (19:9:a*), (19:9:b*), (19:9:c*) of 19.7. permit us
now to compute the coefficients γ_j^z. We give the graphical representations
instead of the formulae, leaving the elementary verification to the reader.
(Cf. Figure 41. The actual proportions are those of Figure 40, i.e. a/b ~ 3,
cf. there.) Figure 41 plots the three curves γ = γ₁^z, γ = γ₂^z and
γ = γ₃^z. The figure shows that

¹ We have actually shown only that nothing else but the strategy determined in 19.8.
can be good. That this strategy is indeed good could be concluded from the established
existence of (at least) a good strategy, although our passage to the "continuous" case
may there create some doubt. But we shall verify below that the strategy in question is
good, i.e. that it fulfills (19:B) of 19.7.
γ₁^z and γ₃^z coincide in 0 ≤ z ≤ (a − b)/a, and that γ₁^z and γ₂^z
coincide in (a − b)/a ≤ z ≤ 1. All three curves are made
of two linear pieces each, joining at z = (a − b)/a.

Figure 40.

Figure 41.

The actual values of the γ_j^z at the critical points z = 0, (a − b)/a, 1 are given in the figure.¹

¹ The simple computational verification of these results is left to the reader.
19.9.3. Comparison of Figures 40 and 41 shows that our strategy is
indeed good, i.e. that it fulfills (19:B) of 19.7. Indeed: In 0 ≤ z ≤ (a − b)/a,
where both ρ₁^z ≠ 0, ρ₃^z ≠ 0, both γ₁^z and γ₃^z are the lowest curves, i.e. equal
to Min_j γ_j^z. In (a − b)/a < z ≤ 1, where only ρ₁^z ≠ 0, there only γ₁^z is the
lowest curve, i.e. equal to Min_j γ_j^z. (The behavior of γ₂^z does not matter,
since always ρ₂^z = 0.)

We can also compute K from (19:7*) in 19.7., the value of a play. K = 0
is easily obtained; and this is the value to be expected, since the game is
symmetric.
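The verification just described can also be made numerically. The sketch below is an added illustration (a = 3, b = 1 and the grid size are assumptions); it computes γ₁^z, γ₂^z, γ₃^z from (19:9:a*)-(19:9:c*) for the good strategy and checks the coincidences of Figure 41, the minimum property of (19:B), and K = 0:

```python
# gamma_j^z of (19:9:a*)-(19:9:c*) for the good strategy (19:21), on a grid.
# Checks: gamma_1 = gamma_3 in the "Bluffing" zone, gamma_1 minimal above it,
# gamma_2 never below gamma_1, and the value of a play K = 0.
a, b = 3.0, 1.0
z_high = (a - b) / a
n = 2000
zs = [(k + 0.5) / n for k in range(n)]            # midpoint grid on [0, 1]
r1 = [b / (a + b) if z <= z_high else 1.0 for z in zs]
r3 = [1.0 - p for p in r1]                        # rho_2^z = 0 throughout

# integrands below z (first integral) and above z (second integral), rho_2 = 0
lo = {1: [-a * p - b * q for p, q in zip(r1, r3)],
      2: [-a * p - b * q for p, q in zip(r1, r3)],
      3: [b * p - b * q for p, q in zip(r1, r3)]}
hi = {1: [a * p - b * q for p, q in zip(r1, r3)],
      2: [a * p + b * q for p, q in zip(r1, r3)],
      3: [b * p + b * q for p, q in zip(r1, r3)]}

def curve(j):
    # gamma_j at every grid point via running sums; the cell at z itself
    # (measure 1/n) is neglected, as in the text
    out, below, above = [], 0.0, sum(hi[j]) / n
    for k in range(n):
        above -= hi[j][k] / n
        out.append(below + above)
        below += lo[j][k] / n
    return out

g1, g2, g3 = curve(1), curve(2), curve(3)
# K of (19:7*) with sigma = rho (both players use the good strategy)
K = sum(r1[k] * g1[k] + r3[k] * g3[k] for k in range(n)) / n
```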
19.10. Interpretation of the Solution

19.10.1. The results of 19.8., 19.9., although mathematically complete,
call for a certain amount of verbal comment and interpretation, which we
now proceed to give.

First, the picture of the good strategy, as given in Figure 40, indicates
that for a sufficiently strong hand ρ₁^z = 1; i.e. that the player should then
bid "high," and nothing else. This is the case for hands z > (a − b)/a. For
weaker hands, however, ρ₁^z = b/(a + b), ρ₃^z = 1 − ρ₁^z = a/(a + b); so both ρ₁^z, ρ₃^z ≠ 0;
i.e. the player should then bid irregularly "high" and "low" (with specified
probabilities). This is the case for hands z ≤ (a − b)/a. The "high" bids
(in this case) should be rarer than the "low" ones; indeed ρ₁^z/ρ₃^z = b/a, and a > b.
This last formula shows too that this kind of "high" bid becomes
increasingly rare if the cost of a "high" bid (relative to a "low" one)
increases.

Now these "high" bids on "weak" hands, made irregularly, governed
by (specified) probabilities only, and getting rarer when the cost of "high"
bidding is increased, invite an obvious interpretation: These are the
"Bluffs" of ordinary Poker.

Due to the extreme simplifications which we applied to Poker for the
purpose of this discussion, "Bluffing" comes up in a very rudimentary form
only; but the symptoms are nevertheless unmistakable: The player is
advised to bid always "high" on a strong hand (z > (a − b)/a), and to bid
mostly "low" (with the probability a/(a + b)) on a weak one (z < (a − b)/a),
but with occasional, irregularly distributed "Bluffs" (with the probability b/(a + b)).
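These quantities can be tabulated for concrete cost ratios. The following is an added arithmetic sketch; the numerical values of a and b are assumptions, not from the text:

```python
# The quantities of 19.10.1 as functions of the two bids a > b (> 0).
def summary(a, b):
    p_high = b / (a + b)                 # bluff ("high") probability, per (19:21)
    return {"zone": (a - b) / a,         # hands below this form the bluffing zone
            "p_high": p_high,
            "high_to_low": p_high / (1.0 - p_high)}  # equals b/a

cheap = summary(3.0, 1.0)
costly = summary(9.0, 1.0)               # a costlier "high" bid: rarer bluffs
```

With a/b = 3 the bluffing zone covers z < 2/3, a weak hand is bluffed with probability 1/4, and the ratio of "high" to "low" bids there is 1/3 = b/a; raising a/b to 9 shrinks the bluff probability to 1/10.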
19.10.2. Second, the conditions in the zone of "Bluffing," z ≤ (a − b)/a,
throw some light on another matter too: the consequences of deviating
from the good strategy, "permanent optimality," "defensive," "offensive,"
as discussed in 17.10.1., 17.10.2.

Assume that player 2 deviates from the good strategy, i.e. uses proba-
bilities σ_j^z which may differ from the ρ_j^z obtained above. Assume, further-
more, that player 1 still uses those ρ_j^z, i.e. the good strategy. Then we can
use for the γ_j^z of (19:9:a*), (19:9:b*), (19:9:c*) in 19.7. the graphical repre-
sentation of Figure 41, and express the outcome of the play for player 1
by (19:7*) in 19.7.:

(19:22) K = ∫₀¹ { Σ_{j=1}^{3} γ_j^z σ_j^z } dz.

Consequently player 2's σ_j^z are optimal against player 1's ρ_j^z if the analogue
of the condition (19:8) in 19.6. is fulfilled:

(19:C) For each pair z, j for which γ_j^z does not assume its minimum
(in j¹) we have σ_j^z = 0.

I.e. (19:C) is necessary and sufficient for the σ_j^z being just as good against the ρ_j^z as the ρ_j^z
themselves, that is, giving a K = 0. Otherwise the σ_j^z are worse, that is, giving a
K > 0. In other words:

(19:D) A mistake, i.e. a strategy σ_j^z which deviates from the good
strategy ρ_j^z, will cause no losses when the opponent sticks to the
good strategy, if and only if the σ_j^z fulfill (19:C) above.

Now one glance at Figure 41 suffices to make it clear that (19:C) means
σ₂^z = σ₃^z = 0 for z > (a − b)/a, but merely σ₂^z = 0 for z ≤ (a − b)/a.² I.e.: (19:C)
prescribes "high" bidding, and nothing else, for strong hands (z > (a − b)/a);
it forbids "low" bidding with subsequent "Seeing" for all hands; but it
fails to prescribe the probability ratio of "high" bidding and of "low"
bidding (with subsequent "Passing") for weak hands, i.e. in the zone of
"Bluffing," z ≤ (a − b)/a.
19.10.3. Thus any deviation from the good strategy which involves
more than just incorrect "Bluffing" leads to immediate losses. It suffices
for the opponent to stick to the good strategy. Incorrect "Bluffing"
causes no losses against an opponent playing the good strategy; but the

¹ We mean in j, and not in z, j!
² Actually even σ₃^z ≠ 0 would be permitted at the one place z = (a − b)/a. But this
isolated value of z has probability 0, and so it can be disregarded. Cf. footnote 3 on
p. 200.
opponent could inflict losses by deviating appropriately from the good
strategy. I.e. the importance of "Bluffing" lies not in the actual play,
played against a good player, but in the protection which it provides against
the opponent's potential deviations from the good strategy. This is in
agreement with the remarks made at the end of 19.2., particularly with the
second interpretation which we proposed there for "Bluffing."¹ Indeed, the
element of uncertainty created by "Bluffing" is just that type of constraint
on the opponent's strategy to which we referred there, and which was ana-
lyzed at the end of 19.2.

Our results on "Bluffing" fit in also with the conclusions of 17.10.2.
We see that the unique good strategy of this variant of Poker is not per-
manently optimal; hence no permanently optimal strategy exists there.
(Cf. the first remarks in 17.10.2., particularly footnote 3 on p. 163.) And
"Bluffing" is a defensive measure in the sense discussed in the second half
of 17.10.2.
19.10.4. Third and last, let us take a look at the offensive steps indicated
loc. cit., i.e. the deviations from the good strategy by which a player can profit
from his opponent's failure to "Bluff" correctly.

We reverse the roles: Let player 1 "Bluff" incorrectly, i.e. use ρ_j^z different
from those of Figure 40. Since only incorrect "Bluffing" is involved, we
still assume

ρ₂^z ≡ 0 for all z,

ρ₁^z = 1, ρ₃^z = 0 for all z > (a − b)/a.

So we are interested only in the consequences of

(19:23) ρ₁^{z₀} ≷ b/(a + b) for some z₀ < (a − b)/a.²

The left hand side of (19:15) in 19.8. is still a valid expression for γ₃^z − γ₁^z.
Consider now a z < z₀. Then ≷ in (19:23) leaves ∫₀^z ρ₁^{z₁} dz₁ unaffected, but
it increases or decreases ∫_z^1 ρ₁^{z₁} dz₁; hence it decreases or increases, respectively,
the left hand side of (19:15), i.e. γ₃^z − γ₁^z. Since γ₃^z − γ₁^z would be 0 without the change (19:23)
(cf. Figure 41), it will now be ≶ 0. I.e. γ₃^z ≶ γ₁^z. Consider next a z in

z₀ < z ≤ (a − b)/a.

¹ All this holds for the form of Poker now under consideration. For further view-
points cf. 19.16.
² We need this really for more than one z₀, cf. footnote 3 on p. 200. The simplest
assumption is that these inequalities hold in a small neighborhood of the z₀ in question.
It would be easy to treat this matter rigorously, in the sense of footnote 4 on p. 199
and of footnote 3 on p. 200. We refrain from doing it for the reason stated there.
Then ≷ in (19:23) increases or decreases ∫₀^z ρ₁^{z₁} dz₁, while it leaves ∫_z^1 ρ₁^{z₁} dz₁ unaffected;
hence it increases or decreases, respectively, the left hand side of (19:15), i.e. γ₃^z − γ₁^z. Since γ₃^z − γ₁^z
would be 0 without the change (19:23) (cf. Fig. 41), it will now be ≷ 0.
I.e. γ₃^z ≷ γ₁^z. Summing up:

(19:E) The change (19:23) with ≷ causes

γ₃^z ≶ γ₁^z for z < z₀,

γ₃^z ≷ γ₁^z for z₀ < z ≤ (a − b)/a.

Hence the opponent can gain, i.e. decrease the K of (19:22), by using σ_j^z
which differ from the present ρ_j^z: For z < z₀, by increasing σ₃^z or σ₁^z (respectively) at the expense
of the other, i.e. by decreasing or increasing σ₁^z from the value of ρ₁^z, b/(a + b), to the extreme value 0 or 1.
And for z₀ < z ≤ (a − b)/a, by increasing σ₁^z or σ₃^z (respectively) at the expense of the other, i.e. by increasing or decreasing
σ₁^z from the value of ρ₁^z, b/(a + b), to the extreme value 1 or 0. In other words:

(19:F) If the opponent "Bluffs" too much or too little for a certain hand z₀, then
he can be punished by the following deviations from the good
strategy: "Bluffing" less or more (respectively) for hands weaker than z₀, and "Bluff-
ing" more or less (respectively) for hands stronger than z₀.

I.e. by imitating his mistake for hands which are stronger
than z₀, and by doing the opposite for weaker ones.

These are the precise details of how correct "Bluffing" protects against
too much or too little "Bluffing" of the opponent, and of its immediate
consequences. Reflections in this direction could be carried even beyond
this point, but we do not propose to pursue this subject any further.
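The mechanism of (19:E), (19:F) can be sketched numerically. The following is an added illustration; a = 3, b = 1, the grid size and the particular deviation (over-bluffing with probability 1/2 instead of b/(a + b) = 1/4 throughout the bluffing zone) are all assumptions:

```python
# Expected outcome K for the symmetric Poker variant of 19.4.-19.7., on a grid:
# good vs. good, a deviating (over-bluffing) player vs. the good strategy, and
# the deviating player against a best-responding opponent.
a, b = 3.0, 1.0
z_high = (a - b) / a
n = 300
zs = [(k + 0.5) / n for k in range(n)]

def strat(p_bluff):
    # (rho_1, rho_2, rho_3): bluff with p_bluff below z'', always "high" above
    return [(p_bluff, 0.0, 1.0 - p_bluff) if z <= z_high else (1.0, 0.0, 0.0)
            for z in zs]

def payoff(i, j, sgn):
    # payment to player 1; i, j = 1 "high", 2 "low"+"See", 3 "low"+"Pass"
    if i == 1 and j == 3:
        return b                          # player 2 passes
    if i == 3 and j == 1:
        return -b                         # player 1 passes
    amount = a if (i == 1 or j == 1) else b   # a "Seen" high bid costs a
    return amount * sgn

def gamma(rho, j, kB):
    # gamma_j^z at grid point kB, cf. (19:9:a*)-(19:9:c*)
    return sum(rho[kA][i - 1] *
               payoff(i, j, (zs[kA] > zs[kB]) - (zs[kA] < zs[kB]))
               for kA in range(n) for i in (1, 2, 3)) / n

def K(rho, sigma):
    # (19:7*)/(19:22): K = int sum_j gamma_j^z sigma_j^z dz
    return sum(gamma(rho, j, kB) * sigma[kB][j - 1]
               for kB in range(n) for j in (1, 2, 3)) / n

def best_response_value(rho):
    # player 2 plays, at every hand, a j minimizing gamma_j^z, cf. (19:C)
    return sum(min(gamma(rho, j, kB) for j in (1, 2, 3))
               for kB in range(n)) / n

good = strat(b / (a + b))
over_bluff = strat(0.5)
K_good_good = K(good, good)                        # value of the play: 0
K_dev_good = K(over_bluff, good)                   # wrong bluffing alone: no loss
K_good_bestresp = best_response_value(good)        # nothing to gain vs. good
K_dev_punished = best_response_value(over_bluff)   # but it can be punished
```

The run illustrates the text: against the good strategy, incorrect bluffing costs nothing, while the punishing deviations of (19:F) turn it into a definite loss for the over-bluffer (about 1/9 per play in this example).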
19.11. More General Forms of Poker

19.11. While the discussions which we have now concluded throw a
good deal of light on the strategical structure and the possibilities of Poker,
they succeeded only due to our far-reaching simplification of the rules of
the game. These simplifications were formulated and imposed in 19.1.,
19.3. and 19.7. For a real understanding of the game we should now
make an effort to remove them.

By this we do not mean that all the fanciful complications of the game
which we have eliminated (cf. 19.1.) must necessarily be reinstated,¹

¹ Nor do we wish, yet, to consider anything but a two-person game!
but some simple and important features of the game were equally lost, and
their reconsideration would be of great advantage. We mean in particular:

(A) The "hands" should be discrete, and not continuous. (Cf. 19.7.)
(B) There should be more than two possible ways to bid. (Cf. 19.3.)
(C) There should be more than one opportunity for each player to bid,
and alternating bids, instead of simultaneous ones, should also be considered.
(Cf. 19.3.)

The problem of meeting these desiderata (A), (B), (C) simultaneously
and finding the good strategies is unsolved. Therefore we must be satis-
fied for the moment to add (A), (B), (C) separately.

The complete solutions for (A) and for (B) are known, while for (C)
only a very limited amount of progress has been made. It would lead
too far to give all these mathematical deductions in detail, but we shall
report briefly the results concerning (A), (B), (C).
19.12. Discrete Hands

19.12.1. Consider first (A). I.e. let us return to the discrete scale of
hands s = 1, ..., S, as introduced at the end of 19.1.2. and used in
19.4.-19.7. In this case the solution is in many ways similar to that of
Figure 40. Generally ρ₂^s = 0, and there exists a certain s̄ such that ρ₁^s = 1
for s > s̄, while ρ₁^s ≠ 0, 1 for s < s̄. Also, if we change to the z scale (cf.
Fig. 39), then (s̄ − 1)/(S − 1) is very nearly (a − b)/a.¹ So we have a zone of "Bluffing"
and above it a zone of "high" bids, just as in Fig. 40.

But the ρ₁^s for s < s̄, i.e. in the zone of "Bluffing," are not at all equal
or near to the b/(a + b) of Fig. 40.² They oscillate around this value by
amounts which depend on certain arithmetical peculiarities of s and S, but do not
tend to disappear for S → ∞. The averages of the ρ₁^s, however, tend to
b/(a + b).³ In other words:

The good strategy of the discrete game is very much like the good
strategy of the continuous game: this is true for all details as far as the
division into two zones (of "Bluffing" and of "high" bids) is concerned;
also for the positions and sizes of these zones, and for the events in the zone
of "high" bids. But in the zone of "Bluffing" it applies only to average
statements (concerning several hands of approximately equal strength).
The precise procedures for individual hands may differ widely from those

¹ Precisely: (s̄ − 1)/(S − 1) → (a − b)/a for S → ∞.
² I.e. not ρ₁^s → b/(a + b) for S → ∞, whatever the variability of s.
³ Actually ½(ρ₁^s + ρ₁^{s+1}) → b/(a + b) for most s < s̄.
given in Figure 40, and depend on arithmetical peculiarities of s and S
(with respect to a/b).¹

19.12.2. Thus the strategy which corresponds more precisely to Figure 40
(i.e. where ρ₁^s = b/(a + b) for all s < s̄) is not good, and it differs quite
considerably from the good one. Nevertheless it can be shown that the
maximal loss which can be incurred by playing this "average" strategy
is not great. More precisely, it tends to 0 for S → ∞.²

So we see: In the discrete game the correct way of "Bluffing" has a
very complicated "fine structure," which however secures only an extremely
small advantage to the player who uses it.

This phenomenon is possibly typical, and recurs in much more compli-
cated real games. It shows how extremely careful one must be in asserting
or expecting continuity in this theory.³ But the practical importance,
i.e. the gains and losses caused, seems to be small, and the whole thing
is probably terra incognita to even the most experienced players.
19.13. m Possible Bids

19.13.1. Consider, second, (B): I.e. let us keep the hands continuous, but
permit bidding in more than two ways. I.e. we replace the two bids

a > b (> 0)

by a greater number, say m, ordered:

a₁ > a₂ > ··· > a_{m−1} > a_m (> 0).

In this case too the solution bears a certain similarity to that of Figure 40.⁴
There exists a certain z₀⁵ such that for z > z₀ the player should make the
highest bid, and nothing else, while for z < z₀ he should make irregularly
various bids (always including the highest bid a₁, but also others), with
specified probabilities. Which bids he must make, and with what probabilities,
is determined by the value of z.¹

¹ Thus in the equivalent of Figure 40, the left part of the figure will not be a straight
line (ρ = b/(a + b) in 0 ≤ z ≤ (a − b)/a), but one which oscillates violently around this
average.
² It is actually of the order 1/S. Remember that in real Poker S is about 2½ millions.
(Cf. footnote 4 on p. 187.)
³ Recall in this connection the remarks made in the second part of footnote 4 on p.
198.
⁴ It has actually been determined only under the further restriction of the rules which
forbids "Seeing" a higher bid. I.e. each player is expected to make his final, highest bid
at once, and to "Pass" (and accept the consequences) if the opponent's bid should turn
out to be higher than his.
⁵ Analogue of the z₀ = (a − b)/a in Figure 40.

So we have a zone of "Bluffing"
and above it a zone of "high" bids (actually of the highest bid, and nothing
else), just as in Figure 40. But the "Bluffing" in its own zone, z ≤ z₀,
has a much more complicated and varying structure than in Figure 40.

We shall not go into a detailed analysis of this structure, although
it offers some quite interesting aspects. We shall, however, mention one of
its peculiarities.
19.13.2. Let two values

a > b > 0

be given, and use them as highest and lowest bids:

a₁ = a, a_m = b.

Now let m → ∞ and choose the remaining bids a₂, ..., a_{m−1} so that they
fill the interval

(19:24) b ≤ x ≤ a

with unlimitedly increasing density. (Cf. the two examples to be given in
footnote 2 below.) If the good strategy described above now tends to a
limit, i.e. to an asymptotic strategy for m → ∞, then one could interpret
this as a good strategy for the game in which only upper and lower bounds
are set for the bids (a and b), and the bids can be anything in between (i.e.
in (19:24)). I.e. the requirement of a minimum interval between bids,
mentioned at the beginning of 19.3., is removed.

Now this is not the case. E.g. we can interpolate the a₂, ..., a_{m−1}
between a₁ = a and a_m = b both in arithmetic and in geometric sequence.²
In both cases an asymptotic strategy obtains for m → ∞, but the two strate-
gies differ in many essential details.

If we consider the game in which all bids (19:24) are permitted as one
in its own right, then a direct determination of its good strategies is possible.
¹ If the bids which he must make are a₁, a_p, a_q, ..., a_n (1 < p < q < ··· < n),
then it can be shown that their probabilities must be

1/(c a₁), 1/(c a_p), 1/(c a_q), ..., 1/(c a_n)  (with c = 1/a₁ + 1/a_p + 1/a_q + ··· + 1/a_n)

respectively. I.e. if a certain bid is to be made at all, then its probability must be
inversely proportional to its cost.

Which a_p, a_q, ..., a_n actually occur for a given z is determined by a more com-
plicated criterion, which we shall not discuss here.

Observe that the c above was needed only to make the sum of all probabilities
equal to 1. The reader may verify for himself that the probabilities in Figure 40 have
the above values.
² The first one is defined by

a_p = ((m − p)a + (p − 1)b)/(m − 1) for p = 1, 2, ..., m − 1, m;

the second one is defined by

a_p = a^{(m−p)/(m−1)} b^{(p−1)/(m−1)} for p = 1, 2, ..., m − 1, m.
It turns out that both strategies mentioned above are good, together with
many others.

This shows to what complications the abandonment of a minimum inter-
val between bids can lead: a good strategy of the limiting case cannot be an
approximation for the good strategies of all nearby cases with a finite num-
ber of bids. The concluding remarks of 19.12. are thus re-emphasized.
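The probability rule quoted in footnote 1 above can be sketched as follows; this is an added illustration, and the numerical bids are assumptions. Note that for two bids it reproduces the b/(a + b) of Figure 40:

```python
# Probabilities of the bids actually made at a given z: inversely proportional
# to the cost, normalized by c = 1/a_1 + 1/a_p + ... + 1/a_n (footnote 1 above).
def bid_probabilities(bids):
    c = sum(1.0 / x for x in bids)
    return [1.0 / (c * x) for x in bids]

a, b = 3.0, 1.0
p_high, p_low = bid_probabilities([a, b])   # two-bid case of Figure 40

# the two interpolations of footnote 2: arithmetic and geometric
def arithmetic_bids(a, b, m):
    return [((m - p) * a + (p - 1) * b) / (m - 1) for p in range(1, m + 1)]

def geometric_bids(a, b, m):
    return [a ** ((m - p) / (m - 1)) * b ** ((p - 1) / (m - 1))
            for p in range(1, m + 1)]
```

Both interpolations share the endpoints a₁ = a, a_m = b, yet they lead to essentially different asymptotic strategies, as stated in the text.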
19.14. Alternate Bidding

19.14.1. Third and last, consider (C): The only progress so far made in
this direction is that we can replace the simultaneous bids of the two players
by two successive ones, i.e. by an arrangement in which player 1 bids first
and player 2 bids afterwards.

Thus the rules stated in 19.4. are modified as follows:

First, each player obtains, by a chance move, his hand s = 1, ..., S,
each one of these numbers having the same probability 1/S. We denote
the hands of players 1, 2 by s₁, s₂ respectively.

After this,¹ player 1 will, by a personal move, choose either a or b, the
"high" or the "low" bid.² He does this informed about his own hand,
but not about the opponent's hand. If his bid is "low," then the play is
concluded. If his bid is "high," then player 2 will, by a personal move,
choose either a or b, the "high" or the "low" bid.³ He does this informed
about his own hand, and about the opponent's choice, but not about his hand.

This is the play. When it is concluded, the payments are made as
follows: If player 1 bids "low," then for s₁ > s₂, s₁ = s₂, s₁ < s₂ player 1 obtains from player
2 the amount b, 0, −b respectively. If both players bid "high," then for s₁ > s₂, s₁ = s₂, s₁ < s₂
player 1 obtains from player 2 the amount a, 0, −a respectively. If player 1
bids "high" and player 2 bids "low," then player 1 obtains from player 2
the amount b.⁴
19.14.2. The discussion of the pure and mixed strategies can now be
carried out, essentially as we did for our original variant of Poker in 19.5.
We give the main lines of this discussion in a way which will be per-
fectly clear for the reader who remembers the procedure of 19.4.-19.7.

A pure strategy in this game consists clearly of the following specifica-
tions: to state for every hand s = 1, ..., S whether a "high" or a "low"
bid will be made. It is simpler to describe this by a numerical index
i_s = 1, 2; i_s = 1 meaning a "high" bid, i_s = 2 meaning a "low" bid. Thus

¹ We continue from here on as if player 2 had already made the "low" bid, and this
were player 1's turn to "See" or to "Overbid." We disregard "Passing" at this stage.
² I.e. "Overbid" or "See," cf. footnote 1 above.
³ I.e. "See" or "Pass." Observe the shift of meaning since footnote 2 above.
⁴ In interpreting these rules, recall the above footnotes. From the formalistic point
of view, footnote 1 on p. 191 should be recalled, mutatis mutandis.
the strategy is a specification of such an index i_s for every s = 1, ..., S,
i.e. of a sequence i₁, ..., i_S.

This applies to both players 1 and 2; accordingly we shall denote the
above strategy by Σ₁(i₁, ..., i_S) or Σ₂(j₁, ..., j_S). Thus each player
has the same number of strategies: as many as there are sequences
i₁, ..., i_S, i.e. precisely 2^S. With the notations of 11.2.2.:

β₁ = β₂ = β = 2^S.

(But the game is not symmetrical!)

We must now express the payment which player 1 receives if the strate-
gies Σ₁(i₁, ..., i_S), Σ₂(j₁, ..., j_S) are used by the two players. This is
the matrix element H(i₁, ..., i_S; j₁, ..., j_S). If the players have
actually the hands s₁, s₂, then the payment received by player 1 can be expressed
in this way (using the rules stated above): It is L_{sgn(s₁−s₂)}(i_{s₁}, j_{s₂}), where
sgn(s₁ − s₂) is the sign of s₁ − s₂, and where the three functions

L₊(i, j), L₀(i, j), L₋(i, j)

can be represented by the following matrix schemes:

L₊ (s₁ > s₂)        L₀ (s₁ = s₂)        L₋ (s₁ < s₂)
i \ j   1    2      i \ j   1    2      i \ j    1     2
  1     a    b        1     0    b        1     −a     b
  2     b    b        2     0    0        2     −b    −b

Figure 42.          Figure 43.          Figure 44.

Now s₁, s₂ originate from chance moves, as described above. Hence:

H(i₁, ..., i_S; j₁, ..., j_S) = (1/S²) Σ_{s₁,s₂=1}^{S} L_{sgn(s₁−s₂)}(i_{s₁}, j_{s₂}).
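These matrix schemes and the averaging over the chance hands can be sketched directly; the following is an added illustration, with a = 3, b = 1 and S = 4 as assumptions:

```python
# L_+, L_0, L_- of Figures 42-44 and the matrix element H of the successive-
# bid variant; i, j = 1 means "high", i, j = 2 means "low".
a, b = 3.0, 1.0
L = {1:  {(1, 1): a,  (1, 2): b, (2, 1): b,  (2, 2): b},    # s1 > s2
     0:  {(1, 1): 0,  (1, 2): b, (2, 1): 0,  (2, 2): 0},    # s1 = s2
     -1: {(1, 1): -a, (1, 2): b, (2, 1): -b, (2, 2): -b}}   # s1 < s2

def H(i_seq, j_seq):
    # H(i_1,...,i_S; j_1,...,j_S) = (1/S^2) sum_{s1,s2} L_sgn(s1-s2)(i_s1, j_s2)
    S = len(i_seq)
    return sum(L[(s1 > s2) - (s1 < s2)][(i_seq[s1 - 1], j_seq[s2 - 1])]
               for s1 in range(1, S + 1) for s2 in range(1, S + 1)) / S ** 2

S = 4
always_high = (1,) * S          # bid "high" on every hand
always_low = (2,) * S           # bid "low" on every hand
```

E.g. "always high" against "always high" averages to 0 by symmetry of the hands, while "always high" against "always low" yields b, since player 2 then always passes.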
19.14.3. We now pass to the mixed strategies in the sense of 17.2. These
are vectors ξ, η belonging to S_β. We must index the components of these
vectors like the (pure) strategies: we must write ξ_{i₁,...,i_S}, η_{j₁,...,j_S} instead
of ξ_{τ₁}, η_{τ₂}.

We express (17:2) of 17.4.1., which evaluates the expectation of player 1's
gain:

K(ξ, η) = Σ_{i₁,...,i_S} Σ_{j₁,...,j_S} H(i₁, ..., i_S; j₁, ..., j_S) ξ_{i₁,...,i_S} η_{j₁,...,j_S}
        = (1/S²) Σ_{s₁,s₂=1}^{S} Σ_{i₁,...,i_S} Σ_{j₁,...,j_S} L_{sgn(s₁−s₂)}(i_{s₁}, j_{s₂}) ξ_{i₁,...,i_S} η_{j₁,...,j_S}.
There is an advantage in interchanging the two Σ and writing

K(ξ, η) = (1/S²) Σ_{s₁,s₂=1}^{S} Σ_{i,j=1,2} L_{sgn(s₁−s₂)}(i, j) { Σ_{i₁,...,i_S with i_{s₁}=i} ξ_{i₁,...,i_S} } { Σ_{j₁,...,j_S with j_{s₂}=j} η_{j₁,...,j_S} }.

If we now put

(19:25) ρ_i^{s₁} = Σ_{i₁,...,i_S with i_{s₁}=i} ξ_{i₁,...,i_S},

(19:26) σ_j^{s₂} = Σ_{j₁,...,j_S with j_{s₂}=j} η_{j₁,...,j_S},

then the above equation becomes

(19:27) K(ξ, η) = (1/S²) Σ_{s₁,s₂=1}^{S} Σ_{i,j=1,2} L_{sgn(s₁−s₂)}(i, j) ρ_i^{s₁} σ_j^{s₂}.
19.14.4. All this is precisely as in 19.5.2. As there, (19:25) shows that
ρ_i^{s₁} is the probability that player 1, using the mixed strategy ξ, will choose i
when his hand is s₁. (19:26) shows that σ_j^{s₂} is the probability that player 2,
using the mixed strategy η, will choose j when his hand is s₂. It is again
clear intuitively that the expectation value K(ξ, η) depends upon these
probabilities only, and not on the underlying probabilities ξ_{i₁,...,i_S}, η_{j₁,...,j_S}
themselves. (19:27) expresses this, and could easily have been derived
directly on this basis.

It is also clear, both from the meaning of the ρ_i^{s₁}, σ_j^{s₂} and from their formal
definitions (19:25), (19:26), that they fulfill the conditions:

(19:28) all ρ_i^{s₁} ≥ 0, Σ_{i=1}^{2} ρ_i^{s₁} = 1,

(19:29) all σ_j^{s₂} ≥ 0, Σ_{j=1}^{2} σ_j^{s₂} = 1,

and that any ρ_i^{s₁}, σ_j^{s₂} which fulfill these conditions can be obtained from
suitable ξ, η by (19:25), (19:26). (Cf. the corresponding step in 19.5.2.,
particularly footnote 1 on p. 194.) It is therefore opportune to form the
2-dimensional vectors

ρ^{s₁} = {ρ₁^{s₁}, ρ₂^{s₁}},  σ^{s₂} = {σ₁^{s₂}, σ₂^{s₂}}.

Then (19:28), (19:29) state precisely that all ρ^{s₁}, σ^{s₂} belong to S₂.
214 ZERO-SUM TWO-PERSON GAMES: EXAMPLES
Thus ξ (or η) was a vector in S_β, i.e. depending on β − 1 = 2^S − 1 numerical constants; the ρ^{s_1} (or σ^{s_2}) are S vectors in S_2, i.e. each one depends on one numerical constant, hence they amount together to S numerical constants. So we have reduced 2^S − 1 to S. (Cf. the end of 19.5.3.)
19.14.5. We now rewrite (19:27) as in 19.6.:

(19:30)  K(ρ^1, …, ρ^S; σ^1, …, σ^S) = (1/S) Σ_{s_2} Σ_{j} γ^{s_2}_j σ^{s_2}_j,

with the coefficients

γ^{s_2}_j = (1/S) Σ_{s_1} Σ_{i} L_{sgn(s_1 − s_2)}(i, j) ρ^{s_1}_i,

i.e. using the matrix schemes of Figures 42-44,

(19:31:a), (19:31:b)  [the explicit expressions for γ^{s_2}_1, γ^{s_2}_2 in terms of a, b and the ρ^{s_1}_i, read off from Figures 42-44].
Since the game is no longer symmetric, we need also the corresponding formulae in which the roles of the two players are interchanged. This is:

(19:32)  K(ρ^1, …, ρ^S; σ^1, …, σ^S) = (1/S) Σ_{s_1} Σ_{i} δ^{s_1}_i ρ^{s_1}_i,

with the coefficients

δ^{s_1}_i = (1/S) Σ_{s_2} Σ_{j} L_{sgn(s_1 − s_2)}(i, j) σ^{s_2}_j,

i.e. using the matrix schemes of Figures 42-44,

(19:33:a), (19:33:b)  [the explicit expressions for δ^{s_1}_1, δ^{s_1}_2 in terms of a, b and the σ^{s_2}_j, read off from Figures 42-44].
The criteria for good strategies are now essentially repetitions of those in 19.6. I.e. due to the asymmetry of the variant now under consideration, our present criterion will be obtained from the general criterion (17:D) of 17.9. in the same way as that of 19.6. could be obtained from the symmetrical criterion at the end of 17.11.2. I.e.:

(19:G)  The ρ^1, …, ρ^S and the σ^1, …, σ^S (they all belong to S_2) describe good strategies if and only if this is true: For each s_2, j for which γ^{s_2}_j does not assume its minimum (in j) 1 we have σ^{s_2}_j = 0. For each s_1, i for which δ^{s_1}_i does not assume its maximum (in i) 1 we have ρ^{s_1}_i = 0.
19.14.6. Now we replace the discrete hands s_1, s_2 by continuous ones, in the sense of 19.7. (Cf. in particular Figure 39 there.) As described in 19.7. this replaces the vectors ρ^{s_1}, σ^{s_2} (s_1, s_2 = 1, …, S) by vectors ρ^{z_1}, σ^{z_2} (0 ≤ z_1, z_2 ≤ 1), which are still probability vectors of the same nature as before, i.e. belonging to S_2. So the components ρ^{s_1}_i, σ^{s_2}_j make place for the components ρ^{z_1}_i, σ^{z_2}_j. Similarly the γ^{s_2}_j, δ^{s_1}_i become γ^{z_2}_j, δ^{z_1}_i. The sums in our formulae (19:30), (19:31:a), (19:31:b), and (19:32), (19:33:a), (19:33:b) go over into integrals, just as in (19:7*), (19:9:a*), (19:9:b*), (19:9:c*) in 19.7. So we obtain:

(19:30*)  K = ∫_0^1 Σ_j γ^{z_2}_j σ^{z_2}_j dz_2,
(19:31:a*), (19:31:b*)  [γ^{z_2}_1, γ^{z_2}_2 as integrals of the form ∫_0^{z_2} (…) dz_1 + ∫_{z_2}^1 (…) dz_1, with integrands built from a, b and the ρ^{z_1}_i]

and

(19:32*)  K = ∫_0^1 Σ_i δ^{z_1}_i ρ^{z_1}_i dz_1,
(19:33:a*), (19:33:b*)  [δ^{z_1}_1, δ^{z_1}_2 as integrals of the form ∫_0^{z_1} (…) dz_2 + ∫_{z_1}^1 (…) dz_2, with integrands built from a, b and the σ^{z_2}_j].
Our criterion for good strategies is now equally transformed. (This is the same transition as that from the discrete criterion of 19.6. to the continuous criterion of 19.7.) We obtain:

(19:H)  The ρ^{z_1} and the σ^{z_2} (0 ≤ z_1, z_2 ≤ 1) (they all belong to S_2) describe good strategies if and only if this is true: For each z_2, j for which γ^{z_2}_j does not assume its minimum (in j) 2 we have σ^{z_2}_j = 0. For each z_1, i for which δ^{z_1}_i does not assume its maximum (in i) 2 we have ρ^{z_1}_i = 0.

1 We mean in j (i) and not in s_2, j (s_1, i)!
2 We mean in j (i) and not in z_2, j (z_1, i)!
19.15. Mathematical Description of All Solutions
19.15.1. The determination of the good strategies ρ^{z_1} and σ^{z_2}, i.e. of the solutions of the implicit condition stated at the end of 19.14., can be carried out completely. The mathematical methods which achieve this are similar to those with which we determined in 19.8. the good strategies of our original variant of Poker, i.e. the solutions of the implicit condition stated at the end of 19.7.
We shall not give the mathematical discussion here, but we shall describe the good strategies ρ^{z_1} and σ^{z_2} which it produces.
There exists one and only one good strategy ρ^{z_1}, while the good strategies σ^{z_2} form an extensive family. (Cf. Figures 45, 46. The actual proportions of these figures correspond to a/b ~ 3.)
Figure 45. Figure 46.
[Figures 45, 46: plots of the unique good strategy ρ^z (Figure 45) and of the good strategies σ^z (Figure 46), with the interval endpoints

u = (a − b)b / (a(a + 3b)),  v = (a² + 2ab − b²) / (a(a + 3b)).]

The curves plot ρ = ρ^z_1 and σ = σ^z_1 respectively. Thus the height of the curve above the line ρ = 0 (σ = 0) is the probability of a "high" bid, ρ^z_1 (σ^z_1); the height of the line ρ = 1 (σ = 1) above the curve is the probability of a "low" bid, ρ^z_2 = 1 − ρ^z_1 (σ^z_2 = 1 − σ^z_1). The irregular part of the σ = σ^z_1 curve (in Figure 46) in the interval u ≤ z ≤ v represents the multiplicity of the good strategies σ^z: Indeed, this part of the σ = σ^z_1 curve is subject to the following (necessary and sufficient) conditions:

(1/(v − z_0)) ∫_{z_0}^{v} σ^z_1 dz  = b/a  when z_0 = u,
                                    ≥ b/a  when u < z_0 < v.

Verbally: Between u and v the average of σ^z_1 is b/a, and on any right end of this interval the average of σ^z_1 is ≥ b/a.
Thus both ρ^z and σ^z exhibit three different types of behavior on these three intervals: 1

First: 0 ≤ z < u.  Second: u ≤ z ≤ v.  Third: v < z ≤ 1.

The lengths of these three intervals are u, v − u, 1 − v, and the somewhat complicated expressions for u, v can be best remembered with the help of these easily verified ratios:

u : 1 − v = a − b : a + b,
v − u : 1 − v = a : b.
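The two ratios can be verified with exact rational arithmetic, taking the endpoint expressions that accompany Figures 45, 46 as u = (a − b)b/(a(a + 3b)) and v = (a² + 2ab − b²)/(a(a + 3b)). A short sketch:

```python
from fractions import Fraction

def interval_endpoints(a, b):
    # u, v as given with Figures 45, 46
    denom = a * (a + 3 * b)
    u = Fraction((a - b) * b, denom)
    v = Fraction(a * a + 2 * a * b - b * b, denom)
    return u, v

# a/b = 3 is the ratio on which the book's figures are based.
a, b = 3, 1
u, v = interval_endpoints(a, b)
print(u, v)  # 1/9 7/9

# The two memory aids:  u : 1 - v = a - b : a + b,  v - u : 1 - v = a : b.
assert u * (a + b) == (1 - v) * (a - b)
assert (v - u) * b == (1 - v) * a
```

The same two assertions hold for any admissible pair a > b > 0, since they are algebraic identities in a and b.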
19.15.2. The formulae (19:31:a*), (19:31:b*) and (19:33:a*), (19:33:b*) of 19.14.6. permit us now to compute the coefficients γ^{z_2}_j, δ^{z_1}_i. We give (as in 19.9. in Figure 41) the graphical representations, instead of the formulae, leaving the elementary verification to the reader. For identification of the ρ^{z_1}, σ^{z_2} as good strategies only the differences δ^{z_1}_1 − δ^{z_1}_2, γ^{z_2}_1 − γ^{z_2}_2 matter: Indeed, the criterion at the end of 19.14. can be formulated as stating that whenever this difference is > 0 then ρ^z_2 = 0 or σ^z_1 = 0 respectively, and that whenever

Figure 47. Figure 48.
[tg α = 2a, tg β = 2b, tg γ = 2(a − b)]

this difference is < 0 then ρ^z_1 = 0 or σ^z_2 = 0 respectively. We give therefore the graphs of these differences. (Cf. Figures 47, 48. The actual proportions are those of Figures 45, 46; i.e. a/b ~ 3, cf. there.)
One line plots the curve γ = γ^z_1 − γ^z_2; the other plots the curve δ = δ^z_1 − δ^z_2. The irregular part of the δ = δ^z_1 − δ^z_2 curve (in Figure 48) in the interval u ≤ z ≤ v corresponds to the similarly irregular part of the σ = σ^z_1 curve (in Figure 46) in the same interval, i.e. it also represents the multiplicity of the good strategies σ^z. The restriction to which that part of the σ = σ^z_1 curve is subjected (cf. the discussion after Figure 46) means that this part of the δ = δ^z_1 − δ^z_2 curve must lie within the shaded triangle (cf. Figure 48).
19.15.3. Comparison of Figure 45 with Figure 47, and of Figure 46
with Figure 48 shows that our strategies are indeed good, i.e. that they
fulfill (19:H). We leave it to the reader to verify this, in analogy with the
comparison of Figure 40 and Figure 41 in 19.9.
1 Concerning the endpoints of these intervals, etc., cf. footnote 3 on p. 200.
The value of K can also be obtained from (19:30*) or (19:32*) in 19.14.6. The result is:

K = [the explicit expression in terms of a and b; for a/b = 3 it equals b/9, cf. the footnotes below].

Thus player 1 has a positive expectation value for the play, i.e. an advantage 2 which is plausibly imputable to his possessing the initiative.
19.16. Interpretation of the Solutions. Conclusions
19.16.1. The results of 19.15. should now be discussed in the same way
as those of 19.8., 19.9. were in 19.10. We do not wish to do this at full
length, but just to make a few remarks on this subject.
We see that instead of the two zones of Figure 40, three zones appear in Figures 45, 46. The highest one (farthest to the right) corresponds to "high" bids, and nothing else, in all these figures (i.e. for both players). The behavior of the other zones, however, is not so uniform.
For player 2 (Figure 46) the middle zone describes that kind of "Bluffing" which we had in the lowest zone of Figure 40: irregular "high" and "low" bids on the same hand. But the probabilities, while not entirely arbitrary, are not uniquely determined as in Figure 40. 3 And there exists a lowest zone (in Figure 46) where player 2 must always bid "low," i.e. where his hand is too weak for that mixed conduct.
Furthermore, in player 2's middle zone the γ^z_j show the same indifference as in Figure 41 (γ^z_1 − γ^z_2 = 0 there, both in Figure 41 and in Figure 47), so the motives for his conduct in this zone are as indirect as those discussed in the last part of 19.10. Indeed, these "high" bids are more of a defense against "Bluffing" than "Bluffing" proper. Since this bid of player 2 concludes the play, there is indeed no motive for the latter, while there is a need to put a rein on the opponent's "Bluffing" by occasional "high" bids, by "Seeing" him.
For player 1 (Figure 45) the situation is different. He must bid "high," and nothing else, in the lowest zone; and bid "low," and nothing else, in the middle zone. These "high" bids on the very weakest hands, while the bid on the medium hands is "low," are aggressive "Bluffing" in its
1 For numerical orientation: If a/b = 3, which is the ratio on which all our figures are based, then u = 1/9, v = 7/9 and K = b/9.
2 For a/b ~ 3 this is about b/9 (cf. footnote 1 above), i.e. about 11 per cent of the "low" bid.
3 Cf. the discussion after Figure 46. Indeed, it is even possible to meet those requirements with σ^z_1 = 0 and 1 only; e.g. σ^z_1 = 0 in the lower (a − b)/a fraction and σ^z_1 = 1 in the upper b/a fraction of the middle interval.
The existence of such a solution (i.e. with σ^z_1 never ≠ 0, 1, and by Figure 45 ρ^z_1 never ≠ 0, 1 either) means, of course, that this variant is strictly determined. But a discussion on that basis (i.e. with pure strategies) will not disclose solutions like the one actually drawn in Figure 46.
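The two-valued solution described in this footnote can be checked against the averaging conditions stated after Figure 46. The sketch below takes the endpoint expressions for u, v given with Figures 45, 46 and the figures' illustrative ratio a = 3, b = 1; it verifies that a σ^z_1 equal to 0 on the lower (a − b)/a fraction and to 1 on the upper b/a fraction of [u, v] has average b/a over the whole interval and average ≥ b/a over every right end:

```python
from fractions import Fraction

a, b = Fraction(3), Fraction(1)
u = (a - b) * b / (a * (a + 3 * b))                   # endpoints of the middle
v = (a * a + 2 * a * b - b * b) / (a * (a + 3 * b))   # interval, as in Figs. 45, 46

# switch point: sigma_1 = 0 on [u, w], sigma_1 = 1 on [w, v]
w = u + (v - u) * (a - b) / a

def avg_sigma1(z0):
    # (1/(v - z0)) * integral of sigma_1 from z0 to v for this step function
    mass = max(v - max(z0, w), 0)  # sigma_1 contributes only where it equals 1
    return mass / (v - z0)

assert avg_sigma1(u) == b / a        # equality at the left endpoint
for k in range(1, 10):               # strict interior points of (u, v)
    z0 = u + (v - u) * Fraction(k, 10)
    assert avg_sigma1(z0) >= b / a
print(avg_sigma1(u))
```

Exact rational arithmetic keeps the endpoint equality from being blurred by rounding.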
purest form. The δ^z_i are not at all indifferent in this zone of "Bluffing" (i.e. the lowest zone): δ^z_1 − δ^z_2 > 0 there in Figure 48, i.e. any failure to "Bluff" under these conditions leads to instant losses.
19.16.2. Summing up: Our new variant of Poker distinguishes two varieties of "Bluffing": the purely aggressive one practiced by the player who has the initiative; and a defensive one ("Seeing" irregularly, even with a medium hand, the opponent who is suspected of "Bluffing") practiced by the player who bids last. Our original variant, where the initiative was split between the two players because they bid simultaneously, contained a procedure which we can now recognize as a mixture of these two things. 1
All this gives valuable heuristic hints as to how real Poker, with longer sequences of (alternating) bids and overbids, ought to be approached. The mathematical problem is difficult, but probably not beyond the reach of the techniques that are available. It will be considered in other publications.
1 The variant of E. Borel, referred to in footnote 2 on p. 186, is treated loc. cit. in a way which bears a certain resemblance to our procedure. Using our terminology, the course of E. Borel can be described as follows:
The Max-Min (Max for player 1, Min for player 2) is determined both for pure and for mixed strategies. The two are identical, i.e. this variant is strictly determined.
The good strategies which are obtained in this way are rather similar to those of our Figure 46. Accordingly the characteristics of "Bluffing" do not appear as clearly as in our Figures 40 and 45. Cf. the analogous considerations in the text above.
CHAPTER V
ZERO-SUM THREE-PERSON GAMES
20. Preliminary Survey
20.1. General Viewpoints
20.1.1. The theory of the zero-sum two-person game having been completed, we take the next step in the sense of 12.4.: We shall establish the theory of the zero-sum three-person game. This will bring entirely new viewpoints into play. The types of games discussed thus far have had also their own characteristic problems. We saw that the zero-sum one-person game was characterized by the emergence of a maximum problem and the zero-sum two-person game by the clear-cut opposition of interest which could no longer be described as a maximum problem. And just as the transition from the one-person to the zero-sum two-person game removed the pure maximum character of the problem, so the passage from the zero-sum two-person game to the zero-sum three-person game obliterates the pure opposition of interest.
20.1.2. Indeed, it is apparent that the relationships between two players in a zero-sum three-person game can be manifold. In a zero-sum two-person game anything one player wins is necessarily lost by the other and vice versa, so there is always an absolute antagonism of interests. In a zero-sum three-person game a particular move of a player (which, for the sake of simplicity, we assume to be clearly advantageous to him) may be disadvantageous to both other players, but it may also be advantageous to one and (a fortiori) disadvantageous to the other opponent. 1 Thus some players may occasionally experience a parallelism of interests and it may be imagined that a more elaborate theory will have to decide even whether this parallelism is total, or partial, etc. On the other hand, opposition of interest must also exist in the game (it is zero-sum) and so the theory will have to disentangle the complicated situations which may ensue. It may happen, in particular, that a player has a choice among various policies: That he can adjust his conduct so as to get into parallelism of interest with another player, or the opposite; that he can choose with which of the other two players he wishes to establish such a parallelism, and (possibly) to what extent.
1 All this, of course, is subject to all the complications and difficulties which we have already recognized and overcome in the zero-sum two-person game: whether a particular move is advantageous or disadvantageous to a certain player may not depend on that move alone, but also on what other players do. However, we are trying first to isolate the new difficulties and to analyze them in their purest form. Afterward we shall discuss the interrelation with the old difficulties.
20.1.3. As soon as there is a possibility of choosing with whom to establish parallel interests, this becomes a case of choosing an ally. When alliances are formed, it is to be expected that some kind of a mutual understanding between the two players involved will be necessary. One can also state it this way: A parallelism of interests makes a cooperation desirable, and therefore will probably lead to an agreement between the players involved. An opposition of interests, on the other hand, requires presumably no more than that a player who has elected this alternative act independently in his own interest.
Of all this there can be no vestige in the zero-sum two-person game. Between two players, where one can win only (precisely) what the other loses, agreements or understandings are pointless. 1 This should be clear by common sense. If a formal corroboration (proof) be needed, one can find it in our ability to complete the theory of the zero-sum two-person game without ever mentioning agreements or understandings between players.
20.2. Coalitions
20.2.1. We have thus recognized a qualitatively different feature of the zero-sum three-person game (as against the zero-sum two-person game). Whether it is the only one is a question which can be decided only later. If we succeed in completing the theory of the zero-sum three-person game without bringing in any further new concepts, then we can claim to have established this uniqueness. This will be the case essentially when we reach 23.1. For the moment we simply observe that this is a new major element in the situation, and we propose to discuss it fully before taking up anything else.
Thus we wish to concentrate on the alternatives for acting in cooperation with, or in opposition to, others, among which a player can choose. I.e. we want to analyze the possibility of coalitions: the question between which players, and against which player, coalitions will form. 2
1 This is, of course, different in a general two-person game (i.e. one with variable sum): there the two players may conceivably cooperate to produce a greater gain. Thus there is a certain similarity between the general two-person game and the zero-sum three-person game.
We shall see in Chap. XI, particularly in 56.2.2., that there is a general connection behind this: the general n-person game is closely related to the zero-sum n + 1-person game.
2 The following seems worth noting: coalitions occur first in a zero-sum game when the number of participants in the game reaches three. In a two-person game there are not enough players to go around: a coalition absorbs at least two players, and then nobody is left to oppose. But while the three-person game of itself implies coalitions, the scarcity of players is still such as to circumscribe these coalitions in a definite way: a coalition must consist of precisely two players and be directed against precisely one (the remaining) player.
If there are four or more players, then the situation becomes considerably more involved: several coalitions may form, and these may merge or oppose each other, etc. Some instances of this appear at the end of 36.1.2. et seq., the end of 37.1.2. et seq.; another allied phenomenon at the end of 38.3.2.
Consequently it is desirable to form an example of a zero-sum three-person game in which this aspect is foremost and all others are suppressed; i.e., a game in which the coalitions are the only thing that matters, and the only conceivable aim of all players. 1
20.2.2. At this point we may mention also the following circumstance: A player can at best choose between two possible coalitions, since there are two other players either of whom he may try to induce to cooperate with him against the third. We shall have to elucidate by the study of the zero-sum three-person game just how this choice operates, and whether any particular player has such a choice at all. If, however, a player has only one possibility of forming a coalition (in whatever way we shall in fine interpret this operation) then it is not quite clear in what sense there is a coalition at all: moves forced upon a player in a unique way by the necessities of the rules of the game are more in the nature of a (one-sided) strategy than of a (cooperative) coalition. Of course these considerations are rather vague and uncertain at the present stage of our analysis. We bring them up nevertheless, because these distinctions will turn out to be decisive.
It may also seem uncertain, at this stage at least, how the possible
choices of coalitions which confront one player are related to those open to
another; indeed, whether the existence of several alternatives for one player
implies the same for another.
21. The Simple Majority Game of Three Persons
21.1. Description of the Game
21.1. We now formulate the example mentioned above: a simple zero-sum three-person game in which the possibilities of understandings, i.e. coalitions, between the players are the only considerations which matter. This is the game in question:
Each player, by a personal move, chooses the number of one of the two other players. 2 Each one makes his choice uninformed about the choices of the two other players.
After this the payments will be made as follows: If two players have chosen each other's numbers we say that they form a couple. 3 Clearly
1 This is methodically the same device as our consideration of Matching Pennies in the theory of the zero-sum two-person game. We had recognized in 14.7.1. that the decisive new feature of the zero-sum two-person game was the difficulty of deciding which player "finds out" his opponent. Matching Pennies was the game in which this "finding out" dominated the picture completely, where this mattered and nothing else.
2 Player 1 chooses 2 or 3, player 2 chooses 1 or 3, player 3 chooses 1 or 2.
3 It will be seen that the formation of a couple is in the interest of the players who create it. Accordingly our discussion of understandings and coalitions in the paragraphs which follow will show that the players combine into a coalition in order to be able to form a couple. The difference between the concepts of a couple and a coalition nevertheless should not be overlooked: A couple is a formal concept which figures in the set of rules of the game which we define now; a coalition is a notion belonging to the theory concerning this game (and, as will be seen, many other games).
there will be precisely one couple, or none at all. 1, 2 If there is precisely one couple, then the two players who belong to it get one-half unit each, while the third (excluded) player correspondingly loses one unit. If there is no couple, then no one gets anything. 3
The reader will have no difficulty in recognizing the actual social processes for which this game is a highly schematized model. We shall call it the simple majority game (of three players).
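The rules just stated are compact enough to be put in code. The sketch below (the function and variable names are ours) enumerates the 2 · 2 · 2 = 8 combinations of personal moves, forms the couple, if any, and pays out accordingly; it confirms that a couple arises in exactly 6 of the 8 combinations and that every payoff vector sums to zero:

```python
from itertools import product

def payoffs(choice):
    # choice[p] is the player (0-2) whose number player p chose; a player
    # may not choose himself. A couple is a pair who chose each other.
    couples = [(p, q) for p in range(3) for q in range(p + 1, 3)
               if choice[p] == q and choice[q] == p]
    if not couples:               # no couple: no payments
        return (0, 0, 0)
    (p, q), = couples             # at most one couple can exist (cf. footnote 1)
    result = [-1, -1, -1]         # the excluded player loses one unit...
    result[p] = result[q] = 0.5   # ...and each member of the couple gets one-half
    return tuple(result)

options = [[q for q in range(3) if q != p] for p in range(3)]
all_plays = list(product(*options))   # 2 * 2 * 2 = 8 combinations
with_couple = [c for c in all_plays if payoffs(c) != (0, 0, 0)]
print(len(all_plays), len(with_couple))
```

The two couple-less plays are exactly the cyclic choices mentioned in footnote 2 (1 chooses 2, 2 chooses 3, 3 chooses 1, and its reverse).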
21.2. Analysis of the Game. Necessity of "Understandings"
21.2.1. Let us try to understand the situation which exists when the
game is played.
To begin with, it is clear that there is absolutely nothing for a player to do in this game but to look for a partner, i.e. for another player who is prepared to form a couple with him. The game is so simple and absolutely devoid of any other strategic possibilities that there just is no occasion for any other reasoned procedure.
move in ignorance of those of the others, no collaboration of the players
can be established during the course of the play. Two players who wish to
collaborate must get together on this subject before the play, i.e. outside
the game. The player who (in making his personal move) lives up to his
agreement (by choosing the partner's number) must possess the conviction
that the partner too will do likewise. As long as we are concerned only
with the rules of the game, as stated above, we are in no position to judge
what the basis for such a conviction may be. In other words what, if
anything, enforces the "sanctity" of such agreements? There may be
games which, themselves, by virtue of the rules of the game as defined in 6.1. and 10.1., provide the mechanism for agreements and for their enforcement. 4 But we cannot base our considerations on this possibility, since a
game need not provide this mechanism; the simple majority game described
above certainly does not. Thus there seems to be no escape from the
necessity of considering agreements concluded outside the game. If we do
not allow for them, then it is hard to see what, if anything, will govern the
conduct of a player in a simple majority game. Or, to put this in a somewhat different form:
1 I.e. there cannot be simultaneously two different couples. Indeed, two couples must have one player in common (since there are only three players), and the number chosen by this player must be that of the other player in both couples, i.e. the two couples are identical.
2 It may happen that no couples exist: e.g., if 1 chooses 2, 2 chooses 3, and 3 chooses 1.
3 For the sake of absolute formal correctness this should still be arranged according
to the patterns of 6. and 7. in Chap. II. We leave this to the reader, as in the analogous
situation discussed in footnote 1 on p. 191.
4 By providing personal moves of one player, about which only one other player is informed, and which contain (possibly conditional) statements of the first player's future policy; and by prescribing for him to adhere subsequently to these statements, or by providing (in the functions which determine the outcome of a game) penalties for the non-adherence.
224 ZEROSUM THREEPERSON GAMES
We are trying to establish a theory of the rational conduct of the participants in a given game. In our consideration of the simple majority game we have reached the point beyond which it is difficult to go in formulating such a theory without auxiliary concepts such as "agreements," "understandings," etc. On a later occasion we propose to investigate what theoretical structures are required in order to eliminate these concepts. For this purpose the entire theory of this book will be required as a foundation, and the investigation will proceed along the lines indicated in Chapter XII, and particularly in 66. At any rate, at present our position is too weak and our theory not sufficiently advanced to permit this "self-denial." We shall therefore, in the discussions which follow, make use of the possibility of the establishment of coalitions outside the game; this will include the hypothesis that they are respected by the contracting parties.
21.2.2. These agreements have a certain amount of similarity with "conventions" in some games, like Bridge, with the fundamental difference, however, that those affected only one "organization" (i.e. one player split into two "persons") while we are now confronted with the relationship of two players. At this point the reader may reread with advantage our discussion of "conventions" and related topics in the last part of 6.4.2. and 6.4.3., especially footnote 2 on p. 53.
21.2.3. If our theory were applied as a statistical analysis of a long series of plays of the same game (and not as the analysis of one isolated play) an alternative interpretation would suggest itself. We should then view agreements and all forms of cooperation as establishing themselves by repetition in such a long series of plays.
It would not be impossible to derive a mechanism of enforcement from the player's desire to maintain his record and to be able to rely on the record of his partner. However, we prefer to view our theory as applying to an individual play. But these considerations, nevertheless, possess a certain significance in a virtual sense. The situation is similar to the one which we encountered in the analysis of the (mixed) strategies of a zero-sum two-person game. The reader should apply the discussions of 17.3. mutatis mutandis to the present situation.
21.3. Analysis of the Game: Coalitions. The Role of Symmetry
21.3. Once it is conceded that agreements may exist between the players
in the simple majority game, the path is clear. This game offers to players
who collaborate an infallible opportunity to win and the game does not
offer to anybody opportunities for rational action of any other kind. The
rules are so elementary that this point ought to be fully convincing.
Again the game is wholly symmetric with respect to the three players. That is true as far as the rules of the game are concerned: they do not offer to any player any possibility which is not equally open to any other player. What the players do within these possibilities is, of course, another matter. Their conduct may be unsymmetric; indeed, since understandings, i.e. coalitions, are sure to arise, it will of necessity be unsymmetric. Among the three players there is room for only one coalition (of two players) and one player will necessarily be left out. It is quite instructive to observe how the rules of the game are absolutely fair (in this case, symmetric), but the conduct of the players will necessarily not be. 1, 2
Thus the only significant strategic feature of this game is the possibility of coalitions between two players. 3 And since the rules of the game are perfectly symmetrical, all three possible coalitions 4 must be considered on the same footing. If a coalition is formed, then the rules of the game provide that the two allies get one unit from the third (excluded) player, each one getting one-half unit.
Which of these three possible coalitions will form, is beyond the scope
of the theory, at least at the present stage of its development. (Cf . the end
of 4.3.2.) We can say only that it would be irrational if no coalitions were
formed at all, but as to which particular coalition will be formed must depend
on conditions which we have not yet attempted to analyze.
22. Further Examples
22.1. Unsymmetric Distribution. Necessity of Compensations
22.1.1. The remarks of the preceding paragraphs exhaust, at least for
the time being, the subject of the simple majority game. We must now
begin to remove, one by one, the extremely specializing assumptions which
characterized this game: its very special nature was essential for us in order to observe the role of coalitions in a pure and isolated form, in vitro;
1 We saw in 17.11.2. that no such thing occurs in the zero-sum two-person games. There, if the rules of the game are symmetric, both players get the same amount (i.e. the value of the game is zero), and both have the same good strategies. I.e. there is no reason to expect a difference in their conduct or in the results which they ultimately obtain.
It is on the emergence of coalitions, when more than two players are present, and of the "squeeze" which they produce among the players, that the peculiar situation described above arises. (In our present case of three players the "squeeze" is due to the fact that each coalition can consist of only two players, i.e. less than the total number of players but more than one-half of it. It would be erroneous, however, to assume that no such "squeeze" obtains for a greater number of players.)
2 This is, of course, a very essential feature of the most familiar forms of social organizations. It is also an argument which occurs again and again in the criticism directed against these institutions, most of all against the hypothetical order based upon "laissez-faire." It is the argument that even an absolute, formal fairness (symmetry of the rules of the game) does not guarantee that the use of these rules by the participants will be fair and symmetrical. Indeed, this "does not guarantee" is an understatement: it is to be expected that any exhaustive theory of rational behavior will show that the participants are driven to form coalitions in unsymmetric arrangements.
To the extent to which an exact theory of these coalitions is developed, a real understanding of this classical criticism is achieved. It seems worth emphasizing that this characteristically "social" phenomenon occurs only in the case of three or more participants.
3 Such a coalition is in this game, of course, simply an agreement to choose each other's numbers, so as to form a couple in the sense of the rules. This situation was foreseen already at the beginning of 4.3.2.
4 Between players 1,2; 1,3; 2,3.
but now this step is completed. We must begin to adjust our ideas to
more general situations.
22.1.2. The specialization which we propose to remove first is this: In the simple majority game any coalition can get one unit from the opponent; the rules of the game provide that this unit must be divided evenly among the partners. Let us now consider a game in which each coalition offers the same total return, but where the rules of the game provide for a different distribution. For the sake of simplicity let this be the case only in the coalition of players 1 and 2, where player 1, say, is favored by an amount ε. 1
The rules of the modified game are therefore as follows:
The moves are the same as in the simple majority game described in 21.1. The definition of a couple is the same too. If the couple 1,2 forms, then player 1 gets the amount ½ + ε, player 2 gets the amount ½ − ε, and player 3 loses one unit. If any other couple forms (i.e. 1,3 or 2,3) then the two players who belong to it get one-half unit each while the third (excluded) player loses one unit.
What will happen in this game?
To begin with, it is still characterized by the possibility of three coalitions
corresponding to the three possible couples which may arise in it.
Prima facie it may seem that player 1 has an advantage, since at least in his
couple with player 2 he gets more by than in the original, simple majority
game.
However, this advantage is quite illusory. If player 1 would really
insist on getting the extra e in the couple with player 2, then this would
have the following consequence: The couple 1,3 would never form, because
the couple 1,2 is more desirable from 1's point of view; the couple 1,2 would
never form, because the couple 2,3 is more desirable from 2's point of view;
but the couple 2,3 is entirely unobstructed, since it can be brought about
by a coalition of 2,3 who then need pay no attention to 1 and his special
desires. Thus the couple 2,3 and no other will form; and player 1 will not
get ½ + ε, nor even one-half unit, but will certainly be the excluded player
and lose one unit.
So any attempt of player 1 to keep his privileged position in the couple
1,2 is bound to lead to disaster for him. The best he can do is to take
steps which make the couple 1,2 just as attractive for 2 as the competing
couple 2,3. That is to say, he acts wisely if, in case of the formation of
a couple with 2, he returns the extra ε to his partner. It should be noted
that he cannot keep any fraction of ε; i.e., if he should try to keep an extra
amount ε′ for himself,² then the above arguments could be repeated literally
with ε′ in place of ε.³
¹ It seems natural to assume 0 < ε < ½.
² We mean, of course, 0 < ε′ < ε.
³ So the motives for player 1's ultimate disaster (the certain formation of the couple
2,3) would be weaker, but the disaster would be the same and just as certain as before. Cf. in
this connection footnote 1 on p. 228.
22.1.3. One could try some other variations of the original, simple,
majority game, still always maintaining that the total value of each coalition
is one unit. E.g. we could consider rules where player 1 gets the amount
½ + ε in each couple 1,2, 1,3; while players 2 and 3 split evenly in the couple
2,3. In this case neither 2 nor 3 would care to cooperate with 1 if 1 should
try to keep his extra ε or any fraction thereof. Hence any such attempt
of player 1 would again lead with certainty to a coalition of 2,3 against him
and to a loss of one unit.
Another possibility would be that two players are favored in all couples
with the third: e.g. in the couples 1,3 and 2,3, players 1 and 2 respectively
get ½ + ε, while 3 gets only ½ − ε; and in the couple 1,2 both get one-half
unit each. In this case both players 1 and 2 would lose interest in a coalition
with each other, and player 3 will become the desirable partner for each of
them. One must expect that this will lead to a competitive bidding for his
cooperation. This must ultimately lead to a refund to player 3 of the extra
advantage ε. Only this will bring the couple 1,2 back into the field of
competition and thereby restore equilibrium.
22.1.4. We leave to the reader the consideration of further variants,
where all three players fare differently in all three couples. Furthermore
we shall not push the above analysis further, although this could be done
and would even be desirable in order to answer some plausible objections.
We are satisfied with having established some kind of a general plausibility
for our present approach which can be summarized as follows: It seems
that what a player can get in a definite coalition depends not only on what
the rules of the game provide for that eventuality, but also on the other
(competing) possibilities of coalitions for himself and for his partner. Since
the rules of the game are absolute and inviolable, this means that under
certain conditions compensations must be paid among coalition partners;
i.e. that a player may have to pay a well-defined price to a prospective
coalition partner. The amount of the compensations will depend on what
other alternatives are open to each of the players.
Our examples above have served as a first illustration for these principles.
This being understood, we shall now take up the subject de novo and in
more generality, and handle it in a more precise manner.¹
22.2. Coalitions of Different Strength. Discussion
22.2.1. In accordance with the above we now take a far-reaching step
towards generality. We consider a game in which this is the case:
If players 1,2 cooperate, then they can get the amount c, and no more,
from player 3; if players 1,3 cooperate, they can get the amount 6, and no
more, from player 2; if players 2,3 cooperate, they can get the amount a, and
no more, from player 1.
¹ This is why we need not analyze any further the heuristic arguments of this
paragraph; the discussion of the next paragraphs takes care of everything.
All these possibilities were anticipated at the beginning of 4.3.2. and in 4.3.3.
We make no assumptions whatsoever concerning further particulars
about the rules of this game. So we need not describe by what steps of
what order of complication the above amounts are secured. Nor do we
state how these amounts are divided between the partners, whether and how
either partner can influence or modify this distribution, etc.
We shall nevertheless be able to discuss this game completely. But
it will be necessary to remember that a coalition is probably connected
with compensations passing between the partners. The argument is as
follows:
22.2.2. Consider the situation of player 1. He can enter two alternative
coalitions: with player 2 or with player 3. Assume that he attempts to
retain an amount x under all conditions. In this case player 2 cannot
count upon obtaining more than the amount c − x in a coalition with
player 1. Similarly player 3 cannot count on getting more than the
amount b − x in a coalition with player 1. Now if the sum of these upper
bounds, i.e. the amount (c − x) + (b − x), is less than what players 2
and 3 can get by combining with each other in a coalition, then we may
safely assume that player 1 will find no partner.¹ A coalition of 2 and 3
can obtain the amount a. So we see: If player 1 desires to get an amount x
under all conditions, then he is disqualified from any possibility of finding
a partner if his x fulfills

(c − x) + (b − x) < a.

I.e. the desire to get x is unrealistic and absurd unless

(c − x) + (b − x) ≥ a.

This inequality may be written equivalently as

x ≤ (−a + b + c)/2.
We restate this:

(22:1:a) Player 1 cannot reasonably maintain a claim to get under
all conditions more than the amount α = (−a + b + c)/2.

The same considerations may be repeated for players 2 and 3, and they
give:

(22:1:b) Player 2 cannot reasonably maintain a claim to get under
all conditions more than the amount β = (a − b + c)/2.

(22:1:c) Player 3 cannot reasonably maintain a claim to get under
all conditions more than the amount γ = (a + b − c)/2.
¹ We assume, of course, that a player is not indifferent to any possible profit, however
small. This was implicit in our discussion of the zero-sum two-person game as well.
The traditional idea of the "homo oeconomicus," to the extent to which it is clearly
conceived at all, also contains this assumption.
22.2.3. Now the criteria (22:1:a)-(22:1:c) were only necessary ones, and
one could imagine a priori that further considerations could further lower
their upper bounds α, β, γ, or lead to some other restrictions of what
the players can aim for. This is not so, as the following simple consideration
shows.
One verifies immediately that

α + β = c,  α + γ = b,  β + γ = a.

In other words: If the players 1,2,3 do not aim at more than permitted by
(22:1:a), (22:1:b), (22:1:c), i.e. than α, β, γ respectively, then any two
players who combine can actually obtain these amounts in a coalition.
Thus these claims are fully justified. Of course only two players (the two
who form a coalition) can actually obtain their "justified" dues. The
third player, who is excluded from the coalition, will not get α, β, γ respectively,
but −a, −b, −c instead.¹
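The bounds (22:1:a)-(22:1:c) and the identities of 22.2.3. can be checked numerically; a minimal sketch (the function name is ours), using the simple majority game's values a = b = c = 1 and a second, purely hypothetical triple:

```python
# Upper bounds alpha, beta, gamma on what players 1, 2, 3 can
# reasonably demand, per (22:1:a)-(22:1:c).

def claims(a, b, c):
    alpha = (-a + b + c) / 2
    beta  = ( a - b + c) / 2
    gamma = ( a + b - c) / 2
    return alpha, beta, gamma

for a, b, c in ((1, 1, 1), (1, 2, 3)):        # majority game; hypothetical
    alpha, beta, gamma = claims(a, b, c)
    # The identities of 22.2.3: any two combined claims are attainable.
    assert alpha + beta == c and alpha + gamma == b and beta + gamma == a

print(claims(1, 1, 1))   # each player of the majority game may claim 1/2
```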
22.3. An Inequality. Formulae
22.3.1. At this point an obvious question presents itself: Any player
1,2,3 can get the amount α, β, γ respectively if he succeeds in entering a
coalition; if he does not succeed, he gets instead only −a, −b, −c. This
makes sense only if α, β, γ are greater than the corresponding −a, −b, −c,
since otherwise a player might not want to enter a coalition at all, but
might find it more advantageous to play for himself. So the question is
whether the three differences

p = α − (−a) = α + a,
q = β − (−b) = β + b,
r = γ − (−c) = γ + c,

are all ≥ 0.
It is immediately seen that they are all equal to each other. Indeed:

p = q = r = (a + b + c)/2.

We denote this quantity by Δ/2. Then our question is whether

Δ ≥ 0.

This inequality can be demonstrated as follows:
22.3.2. A coalition of the players 1,2 can obtain (from player 3) the
amount c and no more. If player 1 plays alone, then he can prevent players
2,3 from reducing him to a result worse than −a, since even a coalition of
players 2,3 can obtain (from player 1) the amount a and no more; i.e.
player 1 can get the amount −a for himself without any outside help.
Similarly, player 2 can get the amount −b for himself without any outside
help. Consequently the two players 1,2 between them can get the amount
¹ These are indeed the amounts which a coalition of the other players can wrest from
players 1,2,3 respectively. The coalition cannot take more.
−a − b even if they fail to cooperate with each other. Since the maximum
they can obtain together under any conditions is c, this implies

c ≥ −a − b,  i.e.  Δ = a + b + c ≥ 0.
22.3.3. This proof suggests the following remarks:
First: We have based our argument on player 1. Owing to the symmetry
of the result Δ = a + b + c ≥ 0 with respect to the three players,
the same inequality would have obtained if we had analyzed the situation
of player 2 or player 3. This indicates that there exists a certain symmetry
in the role of the three players.
Second: Δ = 0 means c = −a − b or, just as well, α = −a, and the two
corresponding pairs of equations which obtain by the cyclic permutation
of the three players. So in this case no coalition has a raison d'être: Any
two players can obtain, without cooperating, the same amount which they
can produce in perfect cooperation (e.g. for players 1 and 2 this amount is
−a − b = c). Also, after all is said and done, each player who succeeds
in joining a coalition gets no more than he could get for himself without
outside help (e.g. for player 1 this amount is α = −a).
If, on the other hand, Δ > 0, then every player has a definite interest in
joining a coalition. The advantage contained in this is the same for all
three players: Δ/2.
Here we have again an indication of the symmetry of certain aspects
of the situation for all players: Δ/2 is the inducement to seek a coalition; it
is the same for all players.
22.3.4. Our result can be expressed by the following table:

            Value of a play
  Player   With coalition   Without coalition
    1            α                 −a
    2            β                 −b
    3            γ                 −c

Figure 49.
If we put

a′ = −a + Δ/3 = α − Δ/6 = (−2a + b + c)/3,
b′ = −b + Δ/3 = β − Δ/6 = (a − 2b + c)/3,
c′ = −c + Δ/3 = γ − Δ/6 = (a + b − 2c)/3,

then we have

a′ + b′ + c′ = 0,
and we can express the above table equivalently in the following manner:
(22:A) A play has for the players 1,2,3 the basic values a′, b′, c′ respectively.
(This is a possible valuation, since the sum of these
values is zero, cf. above.) The play will, however, certainly be
attended by the formation of a coalition. Those two players
who form it get (beyond their basic values) a premium of Δ/6, and
the excluded player sustains a loss of Δ/3.
Thus the inducement to form a coalition is Δ/2 for each
player, and always Δ/2 ≥ 0.
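The decomposition in (22:A) can be verified in a short sketch (the function name and the sample values a, b, c = 1, 2, 3 are ours, the latter purely hypothetical):

```python
# Basic values a', b', c' of (22:A), coalition premium Delta/6,
# excluded player's loss Delta/3.

def basic_values(a, b, c):
    delta = a + b + c                      # Delta >= 0 by 22.3.2
    ap = (-2*a + b + c) / 3
    bp = ( a - 2*b + c) / 3
    cp = ( a + b - 2*c) / 3
    return delta, (ap, bp, cp)

a, b, c = 1, 2, 3                          # hypothetical coalition values
delta, (ap, bp, cp) = basic_values(a, b, c)
assert ap + bp + cp == 0                   # a possible valuation: sum zero
# With a coalition: basic value plus Delta/6 equals alpha.
alpha = (-a + b + c) / 2
assert alpha == ap + delta / 6
# Without a coalition: basic value minus Delta/3 equals -a.
assert -a == ap - delta / 3
```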
23. The General Case
23.1. Exhaustive Discussion. Inessential and Essential Games
23.1.1. We can now remove all restrictions.
Let Γ be a perfectly arbitrary zero-sum three-person game. A simple
consideration suffices to bring it within the reach of the analysis of 22.2.,
22.3. We argue as follows:
If two players, say 1 and 2, decide to cooperate completely (postponing
temporarily, for a later settlement, the question of distribution, i.e. of the
compensations to be paid between partners), then Γ becomes a zero-sum
two-person game. The two players in this new game are: the coalition 1,2
(which is now a composite player consisting of two "natural persons"),
and the player 3. Viewed in this manner Γ falls under the theory of the
zero-sum two-person game of Chapter III. Each play of this game has a
well defined value (we mean the v′ defined in 17.4.2.). Let us denote by c
the value of a play for the coalition 1,2 (which in our present interpretation
is one of the players).
Similarly we can assume an absolute coalition between players 1,3 and
view Γ as a zero-sum two-person game between this coalition and the player
2. We then denote by b the value of a play for the coalition 1,3.
Finally we can assume an absolute coalition between players 2,3, and
view Γ as a zero-sum two-person game between this coalition and the
player 1. We then denote by a the value of a play for the coalition 2,3.
It ought to be understood that we do not yet assume that any such
coalition will necessarily arise. The quantities a, b, c are merely computationally
defined; we have formed them on the basis of the main (mathematical)
theorem of 17.6. (For explicit expressions of a, b, c cf. below.)
23.1.2. Now it is clear that the zero-sum three-person game Γ falls
entirely within the domain of validity of 22.2., 22.3.: a coalition of the
players 1,2 or 1,3 or 2,3 can obtain (from the excluded player 3 or 2 or 1)
the amounts c, b, a respectively, and no more. Consequently all results of
22.2., 22.3. hold; in particular the one formulated at the end, which describes
every player's situation with and without a coalition.
23.1.3. These results show that the zero-sum three-person game falls
into two quantitatively different categories, corresponding to the possibilities
Δ = 0 and Δ > 0. Indeed:
Δ = 0: We have seen that in this case coalitions have no raison d'être,
and each player can get the same amount for himself, by playing a "lone
hand" against all others, as he could obtain by any coalition. In this case,
and in this case alone, it is possible to assume a unique value of each play
for each player, the sum of these values being zero. These are the basic
values a′, b′, c′ mentioned at the end of 22.3. In this case the formulae of
22.3. show that a′ = α = −a, b′ = β = −b, c′ = γ = −c. We shall
call a game in this case, in which it is inessential to consider coalitions, an
inessential game.
Δ > 0: In this case there is a definite inducement to form coalitions, as
discussed at the end of 22.3. There is no need to repeat the description
given there; we only mention that now α > a′ > −a, β > b′ > −b,
γ > c′ > −c. We shall call a game in this case, in which coalitions are
essential, an essential game.
Our above classification, inessential and essential, applies at present
only to zero-sum three-person games. But we shall see subsequently that it
can be extended to all games and that it is a differentiation of central
importance.
23.2. Complete Formulae
23.2. Before we analyze this result any further, let us make a few purely
mathematical remarks about the quantities a, b, c, and the α, β, γ, a′, b′, c′,
Δ based upon them, in terms of which our solution was expressed.
Assume the zero-sum three-person game Γ in the normalized form of
11.2.3. There the players 1,2,3 choose the variables τ₁, τ₂, τ₃ respectively
(each one uninformed about the two other choices) and get the amounts
ℋ₁(τ₁, τ₂, τ₃), ℋ₂(τ₁, τ₂, τ₃), ℋ₃(τ₁, τ₂, τ₃) respectively. Of course (the game is
zero-sum):

ℋ₁(τ₁, τ₂, τ₃) + ℋ₂(τ₁, τ₂, τ₃) + ℋ₃(τ₁, τ₂, τ₃) ≡ 0.

The domains of the variables are:

τ₁ = 1, 2, …, β₁,
τ₂ = 1, 2, …, β₂,
τ₃ = 1, 2, …, β₃.
Now in the two-person game which arises between an absolute coalition of
players 1,2, and the player 3, we have the following situation:
The composite player 1,2 has the variables τ₁, τ₂; the other player 3
has the variable τ₃. The former gets the amount

ℋ₁(τ₁, τ₂, τ₃) + ℋ₂(τ₁, τ₂, τ₃) ≡ −ℋ₃(τ₁, τ₂, τ₃),

the latter the negative of this amount.
A mixed strategy of the composite player 1,2 is a vector ξ of S_{β₁β₂}, the
components of which we may denote by ξ_{τ₁τ₂}.¹ Thus the ξ of S_{β₁β₂} are
characterized by

ξ_{τ₁τ₂} ≥ 0,  Σ_{τ₁,τ₂} ξ_{τ₁τ₂} = 1.

¹ The number of pairs τ₁, τ₂ is, of course, β₁β₂.
A mixed strategy of the player 3 is a vector η of S_{β₃}, the components of
which we denote by η_{τ₃}. The η of S_{β₃} are characterized by

η_{τ₃} ≥ 0,  Σ_{τ₃} η_{τ₃} = 1.

The bilinear form K(ξ, η) of (17:2) in 17.4.1. is therefore

K(ξ, η) = Σ_{τ₁,τ₂,τ₃} {ℋ₁(τ₁, τ₂, τ₃) + ℋ₂(τ₁, τ₂, τ₃)} ξ_{τ₁τ₂} η_{τ₃},

and finally

c = Max_ξ Min_η K(ξ, η) = Min_η Max_ξ K(ξ, η).

The expressions for b, a obtain from this by cyclical permutations of the
players 1,2,3 in all details of this representation.
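The formula c = Max_ξ Min_η K(ξ, η) can be checked by brute force for the simple majority game of 21.1. In that game the coalition 1,2 can force its couple by a pure strategy, so pure-strategy maxmin and minmax already coincide; the sketch below (ours) would need mixed strategies for a general game:

```python
# Brute-force value of the coalition game {1,2} vs. 3 in the simple
# majority game of 21.1 (each player names a desired partner).
from itertools import product

def payoff(c1, c2, c3):
    """Payoffs (players 1, 2, 3); a couple forms when two players
    name each other, the excluded player then loses one unit."""
    if c1 == 2 and c2 == 1: return (0.5, 0.5, -1.0)
    if c1 == 3 and c3 == 1: return (0.5, -1.0, 0.5)
    if c2 == 3 and c3 == 2: return (-1.0, 0.5, 0.5)
    return (0.0, 0.0, 0.0)                 # no couple forms

coalition = list(product((2, 3), (1, 3)))  # pure strategies (c1, c2)
maxmin = max(min(payoff(c1, c2, c3)[0] + payoff(c1, c2, c3)[1]
                 for c3 in (1, 2)) for c1, c2 in coalition)
minmax = min(max(payoff(c1, c2, c3)[0] + payoff(c1, c2, c3)[1]
                 for c1, c2 in coalition) for c3 in (1, 2))
assert maxmin == minmax == 1.0             # so c = 1 for this game
```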
We repeat the formulae expressing α, β, γ, a′, b′, c′ and Δ:

Δ = a + b + c, necessarily ≥ 0,
α = (−a + b + c)/2,  a′ = (−2a + b + c)/3,
β = (a − b + c)/2,  b′ = (a − 2b + c)/3,
γ = (a + b − c)/2,  c′ = (a + b − 2c)/3,

and we have

a′ + b′ + c′ = 0,
α = a′ + Δ/6,  β = b′ + Δ/6,  γ = c′ + Δ/6.
24. Discussion of an Objection
24.1. The Case of Perfect Information and Its Significance
24.1.1. We have obtained a solution for the zero-sum three-person game
which accounts for all possibilities and which indicates the direction that the
search for the solutions of the n-person game must take: the analysis of all
possible coalitions, and the competitive relationship which they bear to
each other, which should determine the compensations that players who
want to form a coalition will pay to each other.
We have noticed already that this will be a much more difficult problem
for n ≥ 4 players than it was for n = 3 (cf. footnote 2, p. 221).
Before we attack this question, it is well to pause for a moment to
reconsider our position. In the discussions which follow we shall put the
main stress on the formation of coalitions and the compensations between
the participants in those coalitions, using the theory of the zero-sum
two-person game to determine the values of the ultimate coalitions which oppose
each other after all players have "taken sides" (cf. 25.1.1., 25.2.). But is
this aspect of the matter really as universal as we propose to claim?
We have adduced already some strong positive arguments for it, in our
discussion of the zerosum threeperson game. Our ability to build the
theory of the n-person game (for all n) on this foundation will, in fine, be
the decisive positive argument. But there is a negative argument, an
objection to be considered, which arises in connection with those games
where perfect information prevails.
The objection which we shall now discuss applies only to the above
mentioned special category of games. Thus it would not, if found valid,
provide us with an alternative theory that applies to all games. But since
we claim a general validity for our proposed stand, we must invalidate all
objections, even those which apply only to some special case. 1
24.1.2. Games with perfect information have already been discussed in
15. We saw there that they have important peculiarities and that their
nature can be understood fully only when they are considered in the extensive
form and not merely in the normalized one on which our discussion chiefly
relied (cf. also 14.8.).
The analysis of 15. began by considering n-person games (for all n),
but in its later parts we had to narrow it to the zero-sum two-person game.
At the end, in particular, we found a verbal method of discussing it (cf. 15.8.)
which had some remarkable features: First, while not entirely free from
objections, it seemed worth considering. Second, the argumentation used
was rather different from that by which we had resolved the general case of
the zero-sum two-person game, and while applicable only to this special
case, it was more straightforward than the other argumentation. Third,
it led for the zero-sum two-person games with perfect information to
the same result as our general theory.
Now one might be tempted to use this argumentation for n ≥ 3 players
too; indeed a superficial inspection of the pertinent paragraph 15.8.2. does
not immediately disclose any reason why it should be restricted (as there) to
n = 2 players (cf., however, 15.8.3.). But this procedure makes no
mention of coalitions or understandings between players, etc.; so if it is usable
for n = 3 players, then our present approach is open to grave doubts. 2 We
¹ In other words: in claiming general validity for a theory one necessarily assumes the
burden of proof against all objectors.
² One might hope to evade this issue by expecting to find Δ = 0 for all zero-sum three-
person games with perfect information. This would make coalitions unnecessary. Cf.
the end of 23.1.
Just as games with perfect information avoided the difficulties of the theory of zero-
sum two-person games by being strictly determined (cf. 15.6.1.), they would now avoid
those of the zero-sum three-person games by being inessential.
propose to show therefore why the procedure of 15.8. is inconclusive when
the number of players is three or more.
To do this, let us repeat some characteristic steps of the argumentation
in question (cf. 15.8.2., the notations of which we are also using).
24.2. Detailed Discussion. Necessity of Compensations between Three or More Players
24.2.1. Consider accordingly a game Γ in which perfect information
prevails. Let ℳ₁, ℳ₂, …, ℳ_ν be its moves, σ₁, σ₂, …, σ_ν the choices
connected with these moves, π(σ₁, …, σ_ν) the play characterized by
these choices, and ℱ_j(π(σ₁, …, σ_ν)) the outcome of this play for the player
j (= 1, 2, …, n).

Assume that the moves ℳ₁, ℳ₂, …, ℳ_{ν−1} have already been made,
the outcome of their choices being σ₁, σ₂, …, σ_{ν−1}, and consider the last
move ℳ_ν and its σ_ν. If this is a chance move, i.e. k_ν(σ₁, …, σ_{ν−1}) = 0,
then the various possible values σ_ν = 1, 2, …, α_ν(σ₁, …, σ_{ν−1}) have the
probabilities p_ν(1), p_ν(2), …, p_ν(α_ν(σ₁, …, σ_{ν−1})), respectively. If this
is a personal move of player k, i.e. k_ν(σ₁, …, σ_{ν−1}) = k = 1, 2, …, n,
then player k will choose σ_ν so as to make ℱ_k(π(σ₁, …, σ_{ν−1}, σ_ν)) a maximum.
Denote this σ_ν by σ̄_ν(σ₁, …, σ_{ν−1}). Thus one can argue that the
value of the play is already known (for each player j = 1, …, n) after
the moves ℳ₁, ℳ₂, …, ℳ_{ν−1} (and before ℳ_ν!), i.e. as a function of
σ₁, σ₂, …, σ_{ν−1} alone. Indeed: by the above it is

Σ_{σ_ν} p_ν(σ_ν) ℱ_j(π(σ₁, …, σ_{ν−1}, σ_ν))  for k_ν(σ₁, …, σ_{ν−1}) = 0,

ℱ_j(π(σ₁, …, σ_{ν−1}, σ̄_ν)),  where σ̄_ν = σ̄_ν(σ₁, …, σ_{ν−1}) maximizes
ℱ_k(π(σ₁, …, σ_{ν−1}, σ_ν)),  for k_ν(σ₁, …, σ_{ν−1}) = k = 1, 2, …, n.

Consequently we can treat the game Γ as if it consisted of the moves
ℳ₁, ℳ₂, …, ℳ_{ν−1} only (without ℳ_ν).

By this device we have removed the last move ℳ_ν. Repeating it, we
can similarly remove successively the moves ℳ_{ν−1}, ℳ_{ν−2}, …, ℳ₂, ℳ₁ and
finally obtain a definite value of the play (for each player j = 1, 2, …, n).
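The removal procedure just described can be sketched in code; doing so already exposes the difficulty taken up below, namely that the result may hinge on how the last mover breaks ties. The two-move game and its payoffs here are hypothetical, of our own invention, chosen only so that player 2 is indifferent where players 1 and 3 are not:

```python
# Backward removal of the last move in a toy zero-sum three-person game:
# player 1 moves first ('L' or 'R'), player 2 moves last ('l' or 'r').
# Payoff vectors F_j after moves (sigma1, sigma2):
F = {
    ('L', 'l'): (1, 0, -1),
    ('L', 'r'): (-1, 0, 1),   # player 2 indifferent at 'L'; 1 and 3 are not
    ('R', 'l'): (0, 0, 0),
    ('R', 'r'): (0, 0, 0),
}

def induct(tie_break):
    """Remove the last move: player 2 maximizes F_2, breaking ties with
    `tie_break`; then player 1 maximizes the reduced F_1."""
    def reply(s1):
        best = max(F[(s1, s2)][1] for s2 in ('l', 'r'))
        return tie_break([s2 for s2 in ('l', 'r') if F[(s1, s2)][1] == best])
    return max(('L', 'R'), key=lambda s1: F[(s1, reply(s1))][0])

first = induct(lambda ties: ties[0])    # player 2 resolves ties one way...
last = induct(lambda ties: ties[-1])    # ...or the other way
assert (first, last) == ('L', 'R')      # the "definite value" is not definite
```

With the first rule player 2 answers 'L' with 'l' and player 1 secures 1; with the second rule the answer is 'r', worth −1 to player 1, who therefore plays 'R'. Player 3's opposite stake in the tie is exactly what invites the side payments discussed below.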
24.2.2. For a critical appraisal of this procedure consider the last two
steps ℳ_{ν−1}, ℳ_ν, and assume that they are personal moves of two different
This, however, is not the case. To see that, it suffices to modify the rules of the
simple majority game (cf. 21.1.) as follows: Let the players 1,2,3 make their personal
moves (i.e. the choices of τ₁, τ₂, τ₃ respectively, cf. loc. cit.) in this order, each one being
informed about the anterior moves. It is easy to verify that the values c, b, a of the
three coalitions 1,2, 1,3, 2,3 are the same as before:

c = b = a = 1,  Δ = a + b + c = 3 > 0.

A detailed discussion of this game, with particular respect to the considerations of
21.2., would be of a certain interest, but we do not propose to continue this subject
further at present.
players, say 1,2 respectively. In this situation we have assumed that
player 2 will certainly choose σ_ν so as to maximize ℱ₂(σ₁, …, σ_{ν−1}, σ_ν).
This gives a σ_ν = σ̄_ν(σ₁, …, σ_{ν−1}). Now we have also assumed that
player 1, in choosing σ_{ν−1}, can rely on this; i.e. that he may safely replace the
ℱ₁(σ₁, …, σ_{ν−1}, σ_ν) (which is what he will really obtain) by
ℱ₁(σ₁, …, σ_{ν−1}, σ̄_ν(σ₁, …, σ_{ν−1})) and maximize this latter quantity.¹
But can he rely on this assumption?
To begin with, σ̄_ν(σ₁, …, σ_{ν−1}) may not even be uniquely determined:
ℱ₂(σ₁, …, σ_{ν−1}, σ_ν) may assume its maximum (for given σ₁, …, σ_{ν−1})
at several places σ_ν. In the zero-sum two-person game this was irrelevant:
there ℱ₁ = −ℱ₂, hence two σ_ν which give the same value to ℱ₂ also give the
same value to ℱ₁.² But even in the zero-sum three-person game, ℱ₂ does not
determine ℱ₁, due to the existence of the third player and his ℱ₃! So it
happens here for the first time that a difference which is unimportant for
one player may be significant for another player. This was impossible in
the zero-sum two-person game, where each player won (precisely) what the
other lost.
What then must player 1 expect if two σ_ν are of the same importance for
player 2, but not for player 1? One must expect that he will try to induce
player 2 to choose the σ_ν which is more favorable to him. He could offer
to pay to player 2 any amount up to the difference this makes for him.
This being conceded, one must envisage that player 1 may even try to
induce player 2 to choose a σ_ν which does not maximize ℱ₂(σ₁, …, σ_{ν−1}, σ_ν).
As long as this change causes player 2 less of a loss than it causes player 1
a gain, 3 player 1 can compensate player 2 for his loss, and possibly even
give up to him some part of his profit.
24.2.3. But if player 1 can offer this to player 2, then he must also count
on similar offers coming from player 3 to player 2. I.e. there is no certainty
at all that player 2 will, by his choice of σ_ν, maximize ℱ₂(σ₁, …, σ_{ν−1}, σ_ν).
In comparing two σ_ν one must consider whether player 2's loss is overcompensated
by player 1's or player 3's gain, since this could lead to understandings
and compensations. I.e. one must analyze whether a coalition 1,2
or 2,3 would gain by any modification of σ_ν.
24.2.4. This brings the coalitions back into the picture. A closer analysis
would lead us to the considerations and results of 22.2., 22.3., 23. in every
detail. But it does not seem necessary to carry this out here in complete
detail: after all, this is just a special case, and the discussion of 22.2., 22.3.,
23. was of absolutely general validity (for the zerosum threeperson game)
¹ Since this is a function of σ₁, …, σ_{ν−2}, σ_{ν−1} only, of which σ₁, …, σ_{ν−2} are known
at ℳ_{ν−1}, and σ_{ν−1} is controlled by player 1, he is able to maximize it.
He cannot in any sense maximize ℱ₁(σ₁, …, σ_{ν−1}, σ_ν), since that also depends on σ_ν,
which he neither knows nor controls.
² Indeed, we refrained in 15.8.2. from mentioning ℱ₂ at all: instead of maximizing ℱ₂,
we talked of minimizing ℱ₁. There was no need even to introduce σ̄_ν(σ₁, …, σ_{ν−1}),
and everything was described by Max and Min operations on ℱ₁.
³ I.e. when it happens at the expense of player 3.
provided that the consideration of understandings and compensations, i.e. of
coalitions, is permitted.
We wanted to show that the weakness of the argument of 15.8.2., already
recognized in 15.8.3., becomes destructive exactly when we go beyond the
zero-sum two-person games, and that it leads precisely to the mechanism of
coalitions etc. foreseen in the earlier paragraphs of this chapter. This
should be clear from the above analysis, and so we can return to our original
method in dealing with zero-sum three-person games, i.e. claim full validity
for the results of 22.2., 22.3., 23.
CHAPTER VI
FORMULATION OF THE GENERAL THEORY:
ZERO-SUM n-PERSON GAMES
25. The Characteristic Function
25.1. Motivation and Definition
25.1.1. We now turn to the zero-sum n-person game for general n. The
experience gained in Chapter V concerning the case n = 3 suggests that
the possibilities of coalitions between players will play a decisive role in the
general theory which we are developing. It is therefore important to
evolve a mathematical tool which expresses these "possibilities" in a
quantitative way.
Since we have an exact concept of "value" (of a play) for the zero-sum
two-person game, we can also attribute a "value" to any given group of
players, provided that it is opposed by the coalition of all the other players.
We shall give these rather heuristic indications an exact meaning in what
follows. The important thing is, at any rate, that we shall thus reach a
mathematical concept on which one can try to base a general theory and
that the attempt will, in fine, prove successful.
Let us now state the exact mathematical definitions which carry out this
program.
25.1.2. Suppose then that we have a game Γ of n players who, for the
sake of brevity, will be denoted by 1, 2, …, n. It is convenient to
introduce the set I = (1, 2, …, n) of all these players. Without yet
making any predictions or assumptions about the course a play of this game
is likely to take, we observe this: if we group the players into two parties,
and treat each party as an absolute coalition (i.e. if we assume full cooperation
within each party), then a zero-sum two-person game results.¹ Precisely:
Let S be any given subset of I, −S its complement in I. We
consider the zero-sum two-person game which results when all players k
belonging to S cooperate with each other on the one hand, and all players k
belonging to −S cooperate with each other on the other hand.
Viewed in this manner Γ falls under the theory of the zero-sum two-person
game of Chapter III. Each play of this game has a well defined
value (we mean the v′ defined in 17.8.1.). Let us denote by v(S) the value
of a play for the coalition of all k belonging to S (which, in our present
interpretation, is one of the players).
¹ This is exactly what we did in the case n = 3 in 23.1.1. The general possibility was
already alluded to at the beginning of 24.1.
Mathematical expressions for v(S) obtain as follows:¹
25.1.3. Assume the zero-sum n-person game Γ in the normalized form of
11.2.3. There each player k = 1, 2, …, n chooses a variable τ_k (each one
uninformed about the n − 1 other choices) and gets the amount
ℋ_k(τ₁, τ₂, …, τ_n).
Of course (the game is zero-sum):

(25:1)  Σ_{k=1}^{n} ℋ_k(τ₁, …, τ_n) ≡ 0.

The domains of the variables are:

τ_k = 1, …, β_k  for k = 1, 2, …, n.
Now in the two-person game which arises between an absolute coalition of all
players k belonging to S (player 1′) and that of all players k belonging
to −S (player 2′), we have the following situation:
The composite player 1′ has the aggregate of variables τ_k where k runs
over all elements of S. It is necessary to treat this aggregate as one variable,
and we shall therefore designate it by one symbol τ^S. The composite
player 2′ has the aggregate of variables τ_k where k runs over all elements
of −S. This aggregate too is one variable, which we designate by the
symbol τ^{−S}. The player 1′ gets the amount

(25:2)  ℋ(τ^S, τ^{−S}) = Σ_{k in S} ℋ_k(τ₁, …, τ_n) = −Σ_{k in −S} ℋ_k(τ₁, …, τ_n);²

the player 2′ gets the negative of this amount.
A mixed strategy of the player 1′ is a vector ξ of S_{β^S},³ the components
of which we denote by ξ_{τ^S}. Thus the ξ of S_{β^S} are characterized by

ξ_{τ^S} ≥ 0,  Σ_{τ^S} ξ_{τ^S} = 1.

A mixed strategy of the player 2′ is a vector η of S_{β^{−S}},⁴ the components
of which we denote by η_{τ^{−S}}. Thus the η of S_{β^{−S}} are characterized by

η_{τ^{−S}} ≥ 0,  Σ_{τ^{−S}} η_{τ^{−S}} = 1.
¹ This is a repetition of the construction of 23.2., which applied only to the special
case n = 3.
² The τ^S, τ^{−S} of the first expression form together the aggregate of the τ₁, …, τ_n of
the two other expressions; so τ^S, τ^{−S} determine those τ₁, …, τ_n.
The equality of the two last expressions is, of course, only a restatement of the
zero-sum property.
³ β^S is the number of possible aggregates τ^S, i.e. the product of all β_k where k runs over
all elements of S.
⁴ β^{−S} is the number of possible aggregates τ^{−S}, i.e. the product of all β_k where k runs
over all elements of −S.
The bilinear form K(ξ, η) of (17:2) in 17.4.1. is therefore

K(ξ, η) = Σ_{τ^S, τ^{−S}} ℋ(τ^S, τ^{−S}) ξ_{τ^S} η_{τ^{−S}},

and finally

v(S) = Max_ξ Min_η K(ξ, η) = Min_η Max_ξ K(ξ, η).
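For the simple majority game of 21.1. (n = 3) this characteristic function can be computed by brute force over pure strategies; that suffices here because each coalition can force its best outcome by a pure strategy, whereas a general game would require the mixed-strategy Max Min above. A sketch (all names ours):

```python
# Characteristic function v(S) of the simple majority game, computed
# as a pure-strategy maxmin of the coalition S against its complement.
from itertools import product

PLAYERS = (1, 2, 3)
CHOICES = {1: (2, 3), 2: (1, 3), 3: (1, 2)}    # each names a partner

def payoff(tau):                                # tau maps player -> choice
    if tau[1] == 2 and tau[2] == 1: return {1: 0.5, 2: 0.5, 3: -1.0}
    if tau[1] == 3 and tau[3] == 1: return {1: 0.5, 2: -1.0, 3: 0.5}
    if tau[2] == 3 and tau[3] == 2: return {1: -1.0, 2: 0.5, 3: 0.5}
    return {k: 0.0 for k in PLAYERS}            # no couple forms

def v(S):
    S, comp = tuple(S), tuple(k for k in PLAYERS if k not in S)
    def joint(members):                         # all pure joint strategies
        return [dict(zip(members, c))
                for c in product(*(CHOICES[k] for k in members))]
    def total(tS, tC):
        tau = {**tS, **tC}
        return sum(payoff(tau)[k] for k in S)
    return max(min(total(tS, tC) for tC in joint(comp)) for tS in joint(S))

assert v({1, 2}) == 1.0 and v({3}) == -1.0 and v(PLAYERS) == 0.0
```

As expected, any two-person coalition is worth 1, the excluded player loses 1, and the all-player coalition is worth 0 by the zero-sum property.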
25.2. Discussion of the Concept
25.2.1. The above function v(S) is defined for all subsets S of I and has
real numbers as values. Thus it is, in the sense of 13.1.3., a numerical
set function. We call it the characteristic function of the game Γ. As we
have repeatedly indicated, we expect to base the entire theory of the
zero-sum n-person game on this function.
It is well to visualize what this claim involves. We propose to determine
everything that can be said about coalitions between players, compensations
between partners in every coalition, mergers or fights between coalitions,
etc., in terms of the characteristic function v(S) alone. Prima facie,
this program may seem unreasonable, particularly in view of these two facts:
(a) An altogether fictitious two-person game, which is related to the
real n-person game only by a theoretical construction, was used to define
v(S). Thus v(S) is based on a hypothetical situation, and not strictly
on the n-person game itself.
(b) v(S) describes what a given coalition of players (specifically,
the set S) can obtain from their opponents (the set −S), but it fails to
describe how the proceeds of the enterprise are to be divided among the
partners k belonging to S. This division, the "imputation," is indeed
directly determined by the individual functions ℋ_k(τ_1, … , τ_n), k belonging
to S, while v(S) depends on much less. Indeed, v(S) is determined by
their partial sum ℋ(τ^S, τ^(−S)) alone, and even by less than that since it is
the saddle value of the bilinear form K(ξ, η) based on ℋ(τ^S, τ^(−S)) (cf. the
formulae of 25.1.3.).
25.2.2. In spite of these considerations we expect to find that the
characteristic function v(S) determines everything, including the "imputation"
(cf. (b) above). The analysis of the zero-sum three-person game in
Chapter V indicates that the direct distribution (i.e., "imputation")
by means of the ℋ_k(τ_1, … , τ_n) is necessarily offset by some system of
"compensations" which the players must make to each other before coalitions
can be formed. The "compensations" should be determined essentially
by the possibilities which exist for each partner in the coalition S
(i.e. for each k belonging to S), to forsake it and to join some other coalition
T. (One may have to consider also the influence of possible simultaneous
and concerted desertions by sets of several partners in S etc.) I.e. the
"imputation" of v(S) to the players k belonging to S should be determined
THE CHARACTERISTIC FUNCTION 241
by the other v(T)¹ and not by the ℋ_k(τ_1, … , τ_n). We have demonstrated
this for the zero-sum three-person game in Chapter V. One of the
main objectives of the theory we are trying to build up is to establish the
same thing for the general n-person game.
25.3. Fundamental Properties
25.3.1. Before we undertake to elucidate the importance of the characteristic
function v(S) for the general theory of games, we shall investigate
this function as a mathematical entity in itself. We know that it is a
numerical set function, defined for all subsets S of I = (1, 2, … , n),
and we now propose to determine its essential properties.
It will turn out that they are the following:

(25:3:a)  v(∅) = 0,
(25:3:b)  v(−S) = −v(S),
(25:3:c)  v(S ∪ T) ≥ v(S) + v(T), if S ∩ T = ∅.

We prove first that the characteristic set function v(S) of every game
fulfills (25:3:a)-(25:3:c).
25.3.2. The simplest proof is a conceptual one, which can be carried out
with practically no mathematical formulae. However, since we gave
exact mathematical expressions for v(S) in 25.1.3., one might desire a
strictly mathematical, formalistic proof in terms of the operations Max
and Min and the appropriate vectorial variables. We emphasize therefore
that our conceptual proof is strictly equivalent to the desired formalistic,
mathematical one, and that the translation can be carried out without
any real difficulty. But since the conceptual proof makes the essential
ideas clearer, and in a briefer and simpler way, while the formalistic proof
would involve a certain amount of cumbersome notations, we prefer to
give the former. The reader who is interested may find it a good exercise
to construct the formalistic proof by translating our conceptual one.
25.3.3. Proof of (25:3:a):² The coalition ∅ has no members, so it always
gets the amount zero, therefore v(∅) = 0.
Proof of (25:3:b): v(S) and v(−S) originate from the same (fictitious)
zero-sum two-person game, the one played by the coalition S against
¹ All this is very much in the sense of the remarks in 4.3.3. on the role of "virtual"
existence.
² Observe that we are treating even the empty set ∅ as a coalition. The reader should
think this over carefully. In spite of its strange appearance, the step is harmless and
quite in the spirit of general set theory. Indeed, it would be technically quite a nuisance
to exclude the empty set from consideration.
Of course this empty coalition has no moves, no variables, no influence, no gains,
and no losses. But this is immaterial.
The complementary set of ∅, the set of all players I, will also be treated as a possible
coalition. This too is the convenient procedure from the set-theoretical point of view.
To a lesser extent this coalition also may appear to be strange, since it has no opponents.
Although it has an abundance of members and hence of moves and variables, it will
(in a zero-sum game) equally have nothing to influence, and no gains or losses. But this
too is immaterial.
the coalition −S. The value of a play of this game for its two composite
players is indeed v(S) and v(−S) respectively. Therefore v(−S) = −v(S).
Proof of (25:3:c): The coalition S can obtain from its opponents (by
using an appropriate mixed strategy) the amount v(S) and no more. The
coalition T can obtain similarly the amount v(T) and no more. Hence the
coalition S ∪ T can obtain from its opponents the amount v(S) + v(T),
even if the subcoalitions S and T fail to cooperate with each other.¹ Since
the maximum which the coalition S ∪ T can obtain under any condition is
v(S ∪ T), this implies v(S ∪ T) ≥ v(S) + v(T).²
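The three properties just established can be checked mechanically for any concretely given set function. The following Python sketch is ours, not the book's: it represents v as a dict keyed by frozensets and verifies (25:3:a)-(25:3:c) by exhaustion; the helper name `is_characteristic` is our own invention.

```python
from itertools import combinations

def is_characteristic(v, players):
    # v: dict mapping each frozenset S of players to the number v(S)
    I = frozenset(players)
    subsets = [frozenset(c) for r in range(len(players) + 1)
               for c in combinations(sorted(players), r)]
    if v[frozenset()] != 0:                        # (25:3:a)
        return False
    if any(v[I - S] != -v[S] for S in subsets):    # (25:3:b)
        return False
    for S in subsets:                              # (25:3:c)
        for T in subsets:
            if not (S & T) and v[S | T] < v[S] + v[T]:
                return False
    return True
```

Applied to the values of the essential zero-sum three-person game (v of each one-element set = −1, of each two-element set = 1, of ∅ and I = 0), the test succeeds; perturbing any single value generally breaks (25:3:b) or (25:3:c).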
25.4. Immediate Mathematical Consequences
25.4.1. Before we go further let us draw some conclusions from the above
(25:3:a)-(25:3:c). These will be derived in the sense that they hold for
any numerical set function v(S) which fulfills (25:3:a)-(25:3:c), irrespective
of whether or not it is the characteristic function of a zero-sum n-person
game Γ.

(25:4)  v(I) = 0.

Proof: By (25:3:a), (25:3:b), v(I) = v(−∅) = −v(∅) = 0.

(25:5)  v(S_1 ∪ … ∪ S_p) ≥ v(S_1) + … + v(S_p)
        if S_1, … , S_p are pairwise disjunct subsets of I.

Proof: Immediately by repeated application of (25:3:c).

(25:6)  v(S_1) + … + v(S_p) ≤ 0
        if S_1, … , S_p are a decomposition of I, i.e. pairwise disjunct
        subsets of I with the sum I.

Proof: We have S_1 ∪ … ∪ S_p = I, hence v(S_1 ∪ … ∪ S_p) = 0 by
(25:4). Therefore (25:6) follows from (25:5).
25.4.2. While (25:4)-(25:6) are consequences of (25:3:a)-(25:3:c), they,
and even somewhat less, can replace (25:3:a)-(25:3:c) equivalently.
Precisely:

(25:A)  The conditions (25:3:a)-(25:3:c) are equivalent to the assertion
        of (25:6) for the values p = 1, 2, 3 only; but (25:6) must
        then be stated for p = 1, 2 with an = sign, and for p = 3 with
        a ≤ sign.
¹ Observe that we are now using S ∩ T = ∅. If S and T had common elements, we
could not break up the coalition S ∪ T into the subcoalitions S and T.
² This proof is very nearly a repetition of the proof of a + b + c ≥ 0 in 22.3.2. One
could even deduce our (25:3:c) from that relation: Consider the decomposition of I into
the three disjunct subsets S, T, −(S ∪ T). Treat the three corresponding (hypothetical)
absolute coalitions as the three players of the zero-sum three-person game into which this
transforms Γ. Then −v(S), −v(T), v(S ∪ T) correspond to the a, b, c loc. cit.; hence
a + b + c ≥ 0 means −v(S) − v(T) + v(S ∪ T) ≥ 0; i.e. v(S ∪ T) ≥ v(S) + v(T).
For a v(S) originating from a game, both (25:3:a) and (25:4) are conceptually
contained in the remark of footnote 2 on p. 241.
GIVEN CHARACTERISTIC FUNCTION 243
Proof: (25:6) for p = 2 with an = sign states v(S) + v(−S) = 0
(we write S for S_1, hence S_2 is −S); i.e. v(−S) = −v(S), which is exactly
(25:3:b).
(25:6) for p = 1 with an = sign states v(I) = 0 (in this case S_1 must be
I), which is exactly (25:4). Owing to (25:3:b), this is exactly the same
as (25:3:a). (Cf. the above proof of (25:4).)
(25:6) for p = 3 with a ≤ sign states v(S) + v(T) + v(−(S ∪ T)) ≤ 0
(we write S, T for S_1, S_2; hence S_3 is −(S ∪ T)), i.e.

v(−(S ∪ T)) ≤ −v(S) − v(T).

By (25:3:b) this becomes v(S ∪ T) ≥ v(S) + v(T), which is exactly (25:3:c).
So our assertions are equivalent precisely to the conjunction of (25:3:a)-
(25:3:c).
26. Construction of a Game with a Given Characteristic Function
26.1. The Construction
26.1.1. We now prove the converse of 25.3.1.: That for any numerical set
function v(S) which fulfills the conditions (25:3:a)-(25:3:c) there exists a
zero-sum n-person game Γ of which this v(S) is the characteristic function.
In order to avoid confusion it is better to denote the given numerical
set function which fulfills (25:3:a)-(25:3:c) by v₀(S). We shall define
with its help a certain zero-sum n-person game Γ, and denote the characteristic
function of this Γ by v(S). It will then be necessary to prove that
v(S) = v₀(S).
Let therefore a numerical set function v₀(S) which fulfills (25:3:a)-
(25:3:c) be given. We define the zero-sum n-person game Γ as follows:¹
Each player k = 1, 2, … , n will, by a personal move, choose a subset
S_k of I which contains k. Each one makes his choice independently of the
choice of the other players.²
After this the payments to be made are determined as follows:
Any set S of players, for which

(26:1)  S_k = S for every k belonging to S,

is called a ring.³ ⁴ Any two rings with a common element are identical.⁵
¹ This game Γ is essentially a more general analogue of the simple majority game of
three persons, defined in 21.1. We shall accompany the text which follows with footnotes
pointing out the details of this analogy.
² The n-element set I has 2^(n−1) subsets S containing k, which we can enumerate by an
index r_k(S) = 1, 2, … , 2^(n−1). If we now let the player k choose, instead of S_k, its
index τ_k = r_k(S_k) = 1, 2, … , 2^(n−1), then the game is already in the normalized form of
11.2.3. Clearly all β_k = 2^(n−1).
³ The rings are the analogues of the couples in 21.1. The contents of footnote 3 on
p. 222 apply accordingly; in particular the rings are the formal concept in the set of rules
of the game which induces the coalitions which influence the actual course of each play.
⁴ Verbally: A ring is a set of players, in which every one has chosen just this set.
The analogy with the definition of a couple in 21.1. is clear. The differences are due
to formal convenience: in 21.1. we made each player designate the other element of the
couple which he desires; now we expect him to indicate the entire ring. A closer analysis
of this divergence would be easy enough, but it does not seem necessary.
⁵ Proof: Let S and T be two rings with a common element k; then by (26:1) S_k = S
and S_k = T, and so S = T.
In other words: The totality of all rings (which have actually formed in a
play) is a system of pairwise disjunct subsets of I.
Each player who is contained in none of the rings thus defined forms by
himself a (one-element) set which is called a solo set. Thus the totality
of all rings and solo sets (which have actually formed in a play) is a decomposition
of I; i.e. a system of pairwise disjunct subsets of I with the sum I.
Denote these sets by C_1, … , C_p and the respective numbers of their
elements by n_1, … , n_p.
Consider now a player k. He belongs to precisely one of these sets
C_1, … , C_p, say to C_q. Then player k gets the amount

(26:2)  v₀(C_q)/n_q − (1/n) Σ_{r=1}^{p} v₀(C_r).¹
This completes the description of the game Γ. We shall now show that
this Γ is a zero-sum n-person game and that it has the desired characteristic
function v₀(S).
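A direct transcription of these rules into Python may clarify the construction. The sketch below is ours (the names `payoffs`, `choices`, `v0` are our own): given the choices S_k, it finds the rings by (26:1), adds the solo sets to obtain the decomposition C_1, … , C_p, and pays each player according to (26:2).

```python
def payoffs(choices, v0):
    # choices: dict player -> chosen frozenset S_k (each containing the player)
    # v0: dict frozenset -> given set function value v0(S)
    n = len(choices)
    players = set(choices)
    # (26:1): a ring is a set S with S_k = S for every k belonging to S
    rings = {S for S in set(choices.values())
             if all(choices[k] == S for k in S)}
    covered = set().union(*rings) if rings else set()
    # players in no ring form one-element solo sets
    parts = list(rings) + [frozenset([k]) for k in players - covered]
    total = sum(v0[C] for C in parts)          # sum over r of v0(C_r)
    # (26:2): each k in C_q receives v0(C_q)/n_q - (1/n) * total
    return {k: v0[C] / len(C) - total / n for C in parts for k in C}
```

With the three-person values v0((k)) = −1, v0((i, j)) = 1, letting players 1 and 2 both choose (1, 2) makes that set a ring with certainty, whatever player 3 does, and the payments sum to zero, as the proof below establishes in general.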
26.1.2. Proof of the zero-sum character: Consider one of the sets C_q.
Each one of the n_q players belonging to it gets the same amount, stated in
(26:2). Hence the players of C_q together get the amount

(26:3)  v₀(C_q) − (n_q/n) Σ_{r=1}^{p} v₀(C_r).

In order to obtain the total amount which all players 1, … , n get, we
must sum the expression (26:3) over all sets C_q, i.e. over all q = 1, … , p.
This sum is clearly

Σ_{q=1}^{p} v₀(C_q) − Σ_{r=1}^{p} v₀(C_r),

i.e. zero.²
Proof that the characteristic function is v₀(S): Denote the characteristic
function of Γ by v(S). Remember that (25:3:a)-(25:3:c) hold for v(S)
because it is a characteristic function, and for v₀(S) by hypothesis. Consequently
(25:4)-(25:6) also hold for both v(S) and v₀(S).
We prove first that

(26:4)  v(S) ≥ v₀(S) for all subsets S of I.

If S is empty, then both sides are zero by (25:3:a). So we may assume that
S is not empty. In this case a coalition of all players k belonging to S can
¹ The course of the play, that is the choices S_1, … , S_n (or, in the sense of footnote 2
on p. 243, the choices τ_1, … , τ_n), determine the C_1, … , C_p, and thus the expression
(26:2). Of course (26:2) is the ℋ_k(τ_1, … , τ_n) of the general theory.
² Obviously Σ_{q=1}^{p} n_q = n.
INESSENTIAL AND ESSENTIAL GAMES 245
govern the choices of its S_k so as with certainty to make S a ring. It suffices
for every k in S to choose his S_k = S. Whatever the other players
(in −S) do, S will thus be one of the sets (rings or solo sets) C_1, … , C_p,
say C_q. Thus each k in C_q = S gets the amount (26:2); hence the entire
coalition S gets the amount (26:3). Now we know that the system
C_1, … , C_p is a decomposition of I; hence by (25:6) Σ_{r=1}^{p} v₀(C_r) ≤ 0. That is, the
expression (26:3) is ≥ v₀(C_q) = v₀(S).¹ In other words, the players belonging
to the coalition S can secure for themselves at least the amount v₀(S)
irrespective of what the players in −S do. This means that v(S) ≥ v₀(S);
i.e. (26:4).
Now we can establish the desired formula

(26:5)  v(S) = v₀(S).

Apply (26:4) to −S. Owing to (25:3:b) this means −v(S) ≥ −v₀(S), i.e.

(26:6)  v(S) ≤ v₀(S).

(26:4), (26:6) give together (26:5).²
26.2. Summary
26.2. To sum up: in paragraphs 25.3.-26.1. we have obtained a complete
mathematical characterization of the characteristic functions v(S) of
all possible zero-sum n-person games Γ. If the surmise which we expressed
in 25.2.1. proves to be true, i.e. if we shall be able to base the entire theory
of the game on the global properties of the coalitions as expressed by v(S),
then our characterization of v(S) has revealed the exact mathematical
substratum of the theory. Thus the characterization of v(S) and the functional
relations (25:3:a)-(25:3:c) are of fundamental importance.
We shall therefore undertake a first mathematical analysis of the meaning
and of the immediate properties of these relations. We call the functions
which satisfy them characteristic functions even when they are viewed
in themselves, without reference to any game.
27. Strategic Equivalence. Inessential and Essential Games
27.1. Strategic Equivalence. The Reduced Form
27.1.1. Consider a zero-sum n-person game Γ with the characteristic
function v(S). Let also a system of numbers α_1^0, … , α_n^0 be given. We
¹ Observe that the expression (26:3), i.e. the total amount obtained by the coalition S,
is not determined by the choices of the players in S alone. But we derived for it a lower
bound v₀(S), which is determined.
² Observe that in our discussion of the good strategies of the (fictitious) two-person
game between the coalitions S and −S (our above proof really amounted to that), we
considered only pure strategies, and no mixed ones. In other words, all these two-person
games happened to be strictly determined.
This, however, is irrelevant for the end which we are now pursuing.
now form a new game Γ′ which agrees with Γ in all details except for this:
Γ′ is played in exactly the same way as Γ, but when all is over, player k gets
in Γ′ the amount which he would have got in Γ (after the same play), plus α_k^0.
(Observe that the α_1^0, … , α_n^0 are absolute constants!) Thus if Γ is
brought into the normalized form of 11.2.3. with the functions
ℋ_k(τ_1, … , τ_n), then Γ′ is also in this normalized form, with the corresponding
functions ℋ_k′(τ_1, … , τ_n) = ℋ_k(τ_1, … , τ_n) + α_k^0. Clearly Γ′ will be a zero-sum
n-person game (along with Γ) if and only if

(27:1)  Σ_{k=1}^{n} α_k^0 = 0,

which we assume.
Denote the characteristic function of Γ′ by v′(S); then clearly

(27:2)  v′(S) = v(S) + Σ_{k in S} α_k^0.¹

Now it is apparent that the strategic possibilities of the two games Γ and Γ′
are exactly the same. The only difference between these two games consists
of the fixed payments α_k^0 after each play. And these payments are
absolutely fixed; nothing that any or all of the players can do will modify
them. One could also say that the position of each player has been shifted
by a fixed amount, but that the strategic possibilities, the inducements and
possibilities to form coalitions etc., are entirely unaffected. In other words:
If two characteristic functions v(S) and v′(S) are related to each other by
(27:2),² then every game with the characteristic function v(S) is fully equivalent
from all strategic points of view to some game with the characteristic
function v′(S), and conversely. I.e. v(S) and v′(S) describe two strategically
equivalent families of games. In this sense v(S) and v′(S) may themselves
be considered equivalent.
Observe that all this is independent of the surmise restated in 26.2.,
according to which all games with the same v(S) have the same strategic
characteristics.
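In code, the passage from v to v′ is a one-line transformation. The following Python sketch of (27:2) uses our own names (`shift`, `alpha`); when the α_k^0 sum to zero as in (27:1), the shifted function is again a characteristic function, and the complementarity relation (25:3:b) survives.

```python
def shift(v, alpha):
    # (27:2): v'(S) = v(S) + sum of alpha_k over k in S
    # v: dict frozenset -> value; alpha: dict player -> fixed payment
    return {S: val + sum(alpha[k] for k in S) for S, val in v.items()}
```

For instance, shifting the essential three-person game by α = (1, −1, 0) leaves v′(∅) = v′(I) = 0 and keeps v′(−S) = −v′(S) for every S, exactly as the text asserts.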
27.1.2. The transformation (27:2) (we need pay no attention to (27:1),
cf. footnote 2 above) replaces, as we have seen, the set function v(S) by a
¹ The truth of this relation becomes apparent if one recalls how v(S), v′(S) were
defined with the help of the coalition S. It is also easy to prove (27:2) formalistically
with the help of the ℋ_k(τ_1, … , τ_n), ℋ_k′(τ_1, … , τ_n).
² Under these conditions (27:1) follows and need not be postulated separately.
Indeed, by (25:4) in 25.4.1., v(I) = v′(I) = 0, hence (27:2) gives

Σ_{k=1}^{n} α_k^0 = 0; i.e. (27:1).
strategically fully equivalent set function v′(S). We therefore call this
relationship strategic equivalence.
We now turn to a mathematical property of this concept of strategic
equivalence of characteristic functions.
It is desirable to pick from each family of characteristic functions v(S)
in strategic equivalence a particularly simple representative v̄(S). The
idea is that, given v(S), this representative v̄(S) should be easy to determine,
and that on the other hand two v(S) and v′(S) would be in strategic equivalence
if and only if their representatives v̄(S) and v̄′(S) are identical.
Besides, we may try to choose these representatives v̄(S) in such a fashion
that their analysis is simpler than that of the original v(S).
27.1.3. When we started from characteristic functions v(S) and v′(S),
then the concept of strategic equivalence could be based upon (27:2) alone;
(27:1) ensued (cf. footnote 2, p. 246). However, we propose to start now
from one characteristic function v(S) alone, and to survey all possible v′(S)
which are in strategic equivalence with it, in order to choose the representative
v̄(S) from among them. Therefore the question arises which systems
α_1^0, … , α_n^0 we may use, i.e. for which of these systems (using (27:2)) the
fact that v(S) is a characteristic function entails the same for v′(S). The
answer is immediate, both by what we have said so far, and by direct
verification: The condition (27:1) is necessary and sufficient.¹
Thus we have the n indeterminate quantities α_1^0, … , α_n^0 at our
disposal in the search for a representative v̄(S); but the α_1^0, … , α_n^0 are
subject to one restriction: (27:1). So we have n − 1 free parameters at
our disposal.
27.1.4. We may therefore expect that we can subject the desired representative
v̄(S) to n − 1 requirements. As such we choose the equations

(27:3)  v̄((1)) = v̄((2)) = … = v̄((n)).²

I.e. we require that every one-man coalition (every player left to himself)
should have the same value.
We may substitute (27:2) into (27:3) and state this together with (27:1),
and so formulate all our requirements concerning the α_1^0, … , α_n^0. So
we obtain:

(27:1*)  Σ_{k=1}^{n} α_k^0 = 0,

(27:2*)  v((1)) + α_1^0 = v((2)) + α_2^0 = … = v((n)) + α_n^0.

It is easy to verify that these equations are solved by precisely one system of
α_1^0, … , α_n^0:
¹ This detailed discussion may seem pedantic. We gave it only to make clear that
when we start with two characteristic functions v(S) and v′(S), then (27:1) is superfluous;
but when we start with one characteristic function only, then (27:1) is needed.
² Observe that these are n − 1 and not n equations.
(27:4)  α_k^0 = −v((k)) + (1/n) Σ_{j=1}^{n} v((j)).¹

So we can say:

(27:A)  We call a characteristic function v(S) reduced if and only if it
        satisfies (27:3). Then every characteristic function v(S) is in
        strategic equivalence with precisely one reduced v̄(S). This
        v̄(S) is given by the formulae (27:2) and (27:4), and we call it the
        reduced form of v(S).

The reduced functions will be the representatives for which we have
been looking.
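(27:4) translates directly into code. This Python sketch (ours, with our own name `reduced_form`) computes the unique reduced form of a given characteristic function by combining (27:4) with (27:2):

```python
def reduced_form(v, players):
    # (27:4): alpha_k = -v((k)) + (1/n) * sum over j of v((j))
    n = len(players)
    mean = sum(v[frozenset([j])] for j in players) / n
    alpha = {k: -v[frozenset([k])] + mean for k in players}
    # (27:2): v_bar(S) = v(S) + sum of alpha_k over k in S
    return {S: val + sum(alpha[k] for k in S) for S, val in v.items()}
```

Starting from an asymmetric three-person game with v((1)) = −2, v((2)) = −1, v((3)) = 0, the one-element values of the output are all equal (here −1), which is exactly the reduced property (27:3).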
27.2. Inequalities. The Quantity γ

27.2. Let us consider a reduced characteristic function v(S). We
denote the joint value of the n terms in (27:3) by −γ, i.e.

(27:5)  v((1)) = v((2)) = … = v((n)) = −γ.

We can state (27:5) also this way:

(27:5*)  v(S) = −γ for every one-element set S.

Combination with (25:3:b) in 25.3.1. transforms (27:5*) into

(27:5**)  v(S) = γ for every (n − 1)-element set S.

We re-emphasize that any one of (27:5), (27:5*), (27:5**) is (besides
defining γ) just a restatement of (27:3), i.e. a characterization of the reduced
nature of v(S).
Now apply (25:6) in 25.4.1. to the one-element sets S_1 = (1), … ,
S_n = (n). (So p = n.) Then (27:5) gives −nγ ≤ 0, i.e.:

(27:6)  γ ≥ 0.
Consider next an arbitrary subset S of I. Let p be the number of its
elements: S = (k_1, … , k_p). Now apply (25:5) in 25.4.1. to the one-element
sets S_1 = (k_1), … , S_p = (k_p). Then (27:5) gives

v(S) ≥ −pγ.

Apply this also to −S, which has n − p elements. Owing to (25:3:b) in
25.3.1., the above inequality now becomes

−v(S) ≥ −(n − p)γ, i.e. v(S) ≤ (n − p)γ.
¹ Proof: Denote the joint value of the n terms in (27:2*) by β. Then (27:2*) amounts
to α_k^0 = −v((k)) + β, and so (27:1*) becomes

0 = Σ_{k=1}^{n} α_k^0 = −Σ_{k=1}^{n} v((k)) + nβ; i.e. β = (1/n) Σ_{k=1}^{n} v((k)).
Combining these two inequalities gives:

(27:7)  −pγ ≤ v(S) ≤ (n − p)γ for every p-element set S.

(27:5*) and v(∅) = 0 (i.e. (25:3:a) in 25.3.1.) can also be formulated
this way:

(27:7*)  For p = 0, 1 we have = in the first relation of (27:7).

(27:5**) and v(I) = 0 (i.e. (25:4) in 25.4.1.) can also be formulated
this way:

(27:7**)  For p = n − 1, n we have = in the second relation of (27:7).
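The band (27:7), with equality forced at p = 0, 1 by (27:7*) and at p = n − 1, n by (27:7**), can be verified exhaustively for a small reduced game. A Python sketch of such a check (ours; the name `check_band` is our own):

```python
from itertools import combinations

def check_band(v, players, gamma):
    # verifies (27:7) for every subset, with the equalities (27:7*), (27:7**)
    n = len(players)
    for p in range(n + 1):
        for c in combinations(sorted(players), p):
            val = v[frozenset(c)]
            assert -p * gamma <= val <= (n - p) * gamma     # (27:7)
            if p in (0, 1):
                assert val == -p * gamma                    # (27:7*)
            if p in (n - 1, n):
                assert val == (n - p) * gamma               # (27:7**)
```

For n = 3 the four equalities cover every possible p, which is the computational counterpart of the "Second" remark in 27.5.2. below: for three players the reduced characteristic function is completely determined.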
27.3. Inessentiality and Essentiality

27.3.1. In analyzing these inequalities it is best now to distinguish two
alternatives.
This distinction is based on (27:6):
First case: γ = 0. Then (27:7) gives v(S) = 0 for all S. This is a
perfectly trivial case, in which the game is manifestly devoid of further
possibilities. There is no occasion for any strategy of coalitions, no element
of struggle or competition: each player may play a lone hand, since there
is no advantage in any coalition. Indeed, every player can get the amount
zero for himself irrespective of what the others are doing. And in no
coalition can all its members together get more than zero. Thus the value
of a play of this game is zero for every player, in an absolutely unequivocal
way.
If a general characteristic function v(S) is in strategic equivalence
with such a v̄(S), i.e. if its reduced form is v̄(S) ≡ 0, then we have the
same conditions, only shifted by α_k^0 for the player k. A play of a game Γ
with this characteristic function v(S) has unequivocally the value α_k^0 for
the player k: he can get this amount even alone, irrespective of what the
others are doing. No coalition could do better in toto.
We call a game Γ, the characteristic function v(S) of which has such a
reduced form v̄(S) ≡ 0, inessential.¹
27.3.2. Second case: γ > 0. By a change in unit² we could make γ = 1.³
This obviously affects none of the strategically significant aspects of the
game, and it is occasionally quite convenient to do. At this moment, however,
we do not propose to do this.
In the present case, at any rate, the players will have good reasons to
want to form coalitions. Any player who is left to himself loses the amount
γ (i.e. he gets −γ, cf. (27:5*) or (27:7*)), while any n − 1 players who
¹ That this coincides with the meaning given to the word inessential in 23.1.3. (in the
special case of a zero-sum three-person game) will be seen at the end of 27.4.1.
² Since payments are made, we mean the monetary unit. In a wider sense it might
be the unit of utility. Cf. 2.1.1.
³ This would not have been possible in the first case, where γ = 0.
cooperate win together the amount γ (i.e. their coalition gets γ, cf. (27:5**)
or (27:7**)).¹
Hence an appropriate strategy of coalitions is now of great importance.
We call a game Γ essential when its characteristic function v(S) has a
reduced form v̄(S) not ≡ 0.²
27.4. Various Criteria. Non-additive Utilities

27.4.1. Given a characteristic function v(S), we wish to have an explicit
expression for the γ of its reduced form v̄(S). (Cf. above.)
Now −γ is the joint value of the v̄((k)), i.e. of the v((k)) + α_k^0, and this
is by (27:4) equal to (1/n) Σ_{j=1}^{n} v((j)). Hence

(27:8)  γ = −(1/n) Σ_{j=1}^{n} v((j)).³

Consequently we have:

(27:B)  The game Γ is inessential if and only if

        Σ_{j=1}^{n} v((j)) = 0 (i.e. γ = 0),

        and it is essential if and only if

        Σ_{j=1}^{n} v((j)) < 0 (i.e. γ > 0).⁴

For a zero-sum three-person game we have, with the notations of 23.1.,
v((1)) = −a, v((2)) = −b, v((3)) = −c; so γ = ⅓(a + b + c). Therefore our
concepts of essential and inessential specialize to those of 23.1.3. in the case
of a zero-sum three-person game. Considering the interpretation of these
concepts in both cases, this was to be expected.
¹ This is, of course, not the whole story. There may be other coalitions of > 1
but < n − 1 players which are worth aspiring to. (If this is to happen, n − 1 must
exceed 1 by more than 1, i.e. n ≥ 4.) This depends upon the v(S) of the sets S with
> 1 but < n − 1 elements. But only a complete and detailed theory of games can
appraise the role of these coalitions correctly.
Our above comparison of isolated players and (n − 1)-player coalitions (the biggest
coalitions which have anybody to oppose!) suffices only for our present purpose: to
establish the importance of coalitions in this situation.
² Cf. again footnote 1 on p. 249.
³ So −γ is the β of footnote 1 on p. 248.
⁴ We have seen already that one or the other must be the case, since Σ_{j=1}^{n} v((j)) ≤ 0 as
well as γ ≥ 0.
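Criterion (27:B) requires only the one-element values of v(S). A Python sketch (ours; the names `gamma_of` and `is_essential` are our own):

```python
def gamma_of(v, players):
    # (27:8): gamma = -(1/n) * sum over j of v((j))
    return -sum(v[frozenset([j])] for j in players) / len(players)

def is_essential(v, players):
    # (27:B): essential precisely when gamma > 0
    return gamma_of(v, players) > 0
```

The three-person majority-game values v((k)) = −1 give γ = 1 (essential), while any additive v(S) = Σ_{k in S} α_k^0 with Σ α_k^0 = 0 gives γ = 0 (inessential), in accordance with (27:C) below.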
27.4.2. We can formulate some other criteria of inessentiality:

(27:C)  The game Γ is inessential if and only if its characteristic function
        v(S) can be given this form:

        v(S) = Σ_{k in S} α_k^0

        for a suitable system α_1^0, … , α_n^0.

Proof: Indeed, this expresses by (27:2) precisely that v(S) is in strategic
equivalence with v̄(S) ≡ 0. As this v̄(S) is reduced, it is then the reduced
form of v(S), and this is the meaning of inessentiality.
(27:D)  The game Γ is inessential if and only if its characteristic
        function v(S) has always = in (25:3:c) of 25.3.1.; i.e. when

        v(S ∪ T) = v(S) + v(T) if S ∩ T = ∅.

Proof: Necessity: A v(S) of the form given in (27:C) above obviously
possesses this property.
Sufficiency: Repeated application of this equation gives = in (25:5) of
25.4.1.; i.e.

v(S_1 ∪ … ∪ S_p) = v(S_1) + … + v(S_p)

if S_1, … , S_p are pairwise disjunct.
Consider an arbitrary S, say S = (k_1, … , k_p). Then S_1 = (k_1), … ,
S_p = (k_p) give

v(S) = v((k_1)) + … + v((k_p)).

So we have

v(S) = Σ_{k in S} α_k^0

with α_1^0 = v((1)), … , α_n^0 = v((n)), and so Γ is inessential by (27:C).
27.4.3. Both criteria (27:C) and (27:D) express that the values of all
coalitions arise additively from those of their constituents.¹ It will be
remembered what role the additivity of value, or rather its frequent absence,
has played in economic literature. The cases in which value is not generally
additive were among the most important, but they offered significant
difficulties to every theoretical approach; and one cannot say that these
difficulties have ever been really overcome. In this connection one should
recall the discussions of concepts like complementarity, total value,
imputation, etc. We are now getting into the corresponding phase of our
theory; and it is significant that we find additivity only in the uninteresting
(inessential) case, while the really significant (essential) games have a
non-additive characteristic function.²
¹ The reader will understand that we are using the word "value" (of the coalition S)
for the quantity v(S).
Those readers who are familiar with the mathematical theory of measure
will make this further observation: the additive v(S), i.e. the inessential
games, are exactly the measure functions of I which give I the total
measure zero. Thus the general characteristic functions v(S) are a new
generalization of the concept of measure. These remarks are in a deeper
sense connected with the preceding ones concerning economic value. However,
it would lead too far to pursue this subject further.³
27.5. The Inequalities in the Essential Case

27.5.1. Let us return to the inequalities of 27.2., in particular to (27:7),
(27:7*), (27:7**). For γ = 0 (inessential case) everything is trivially
clear. Assume therefore that γ > 0 (essential case).

[Figure 50. Abscissa: p, number of elements of S. Dot at 0, −γ, γ, or heavy line: range of
possible values v(S) for the S with the corresponding p.]

Now (27:7), (27:7*), (27:7**) set a range of possible values for v(S)
for every number p of elements in S. This range is pictured for each
p = 0, 1, 2, … , n − 2, n − 1, n in Figure 50.
We can add the following remarks:
27.5.2. First: It will be observed that in an essential game, i.e. when
γ > 0, necessarily n ≥ 3. Otherwise the formulae (27:7), (27:7*),
(27:7**), or Figure 50 which expresses their content, lead to a conflict:
For n = 1 or 2 an (n − 1)-element set S has 0 or 1 elements, hence its
² We are, of course, concerned at this moment only with a particular aspect of the
subject: we are considering values of coalitions only, i.e. of concerted acts of behavior,
and not of economic goods or services. The reader will observe, however, that the specialization
is not as far-reaching as it may seem: goods and services stand really for the
economic act of their exchange, i.e. for a concerted act of behavior.
³ The theory of measure reappears in another connection. Cf. 41.3.3.
v(S) must on the one hand be γ, and on the other hand 0 or −γ, which is
impossible.¹
Second: For the smallest possible number of participants in an essential
game, i.e. for n = 3, the formulae (27:7), (27:7*), (27:7**) or Figure 50
determine everything: they state the values of v(S) for 0, 1, n − 1, n-element
sets S; and for n = 3 the following are all possible element numbers: 0, 1, 2,
3. (Cf. also a remark in footnote 1 on p. 250.) This is in harmony with
the fact which we found in 23.1.3., according to which there exists only
one type of essential zero-sum three-person games.
Third: For greater numbers of participants, i.e. for n ≥ 4, the problem
assumes a new complexion. As formulae (27:7), (27:7*), (27:7**) or
Figure 50 show, the element number p of the set S can now have
other values than 0, 1, n − 1, n. I.e. the interval

(27:9)  2 ≤ p ≤ n − 2

now becomes available.² It is in this interval that the above formulae no
longer determine a unique value of v(S); they set for it only the interval

(27:7)  −pγ ≤ v(S) ≤ (n − p)γ,

the length of which is nγ for every p (cf. again Figure 50).
27.5.3. In this connection the question may be asked whether really
the entire interval (27:7) is available, i.e. whether it cannot be narrowed
further by some new, more elaborate considerations concerning v(S).
The answer is: No. It is actually possible to define for every n ≥ 4 a
single game Γ in which, for each p of (27:9), v(S) assumes both values
−pγ and (n − p)γ for suitable p-element sets S. It may suffice to mention
the subject here without further elaboration.
To sum up: The real ramifications of the theory of games appear only
when n ≥ 4 is reached. (Cf. footnote 1 on p. 250, where the same idea
was expounded.)
27.6. Vector Operations on Characteristic Functions
27.6.1. In concluding this section some remarks of a more formal nature
seem appropriate.
The conditions (25:3:a)-(25:3:c) in 25.3.1., which describe the characteristic
function v(S), have a certain vectorial character: they allow analogues
of the vector operations, defined in 16.2.1., of scalar multiplication and
of vector addition. More precisely:
Scalar multiplication: Given a constant t ≥ 0 and a characteristic function
v(S), then tv(S) = u(S) is also a characteristic function. Vector
addition: Given two characteristic functions v(S), w(S); 3 then
1 Of course, in a zero-sum one-person game nothing happens at all, and for the
zero-sum two-person games we have a theory in which no coalitions appear. Hence the
inessentiality of all these cases is to be expected.
2 It has n − 3 elements; and this number is positive as soon as n ≥ 4.
3 Everything here must refer to the same n and to the same set of players
I = (1, 2, ..., n).
254 GENERAL THEORY: ZERO-SUM n-PERSONS
v(S) + w(S) = z(S) is also a characteristic function. The only difference
from the corresponding definitions of 16.2. is that we had to require t ≥ 0. 1, 2
27.6.2. The two operations defined above allow immediate practical
interpretation:
Scalar multiplication: If t = 0, then this produces u(S) ≡ 0, i.e. the
eventless game considered in 27.3.1. So we may assume t > 0. In this
case our operation amounts to a change of the unit of utility, namely to its
multiplication by the factor t.
Vector addition: This corresponds to the superposition of the games
corresponding to v(S) and to w(S). One would imagine that the same
players 1, 2, ..., n are playing these two games simultaneously, but
independently. I.e., no move made in one game is supposed to influence
the other game, as far as the rules are concerned. In this case the characteristic
function of the combined game is clearly the sum of those of the two
constituent games. 3
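The two operations may be sketched in executable form; the representation of v(S) as a table over the subsets of I, and the function names, are ours, not the text's:

```python
from itertools import combinations

def subsets(players):
    """All subsets S of the player set I, as frozensets."""
    for r in range(len(players) + 1):
        for c in combinations(players, r):
            yield frozenset(c)

def is_characteristic(v, players):
    """Check (25:3:a)-(25:3:c): v(empty) = 0, v(I - S) = -v(S), and
    superadditivity v(S + T) >= v(S) + v(T) for disjoint S, T."""
    I = frozenset(players)
    if v[frozenset()] != 0:
        return False
    if any(v[I - S] != -v[S] for S in subsets(players)):
        return False
    return all(v[S | T] >= v[S] + v[T]
               for S in subsets(players) for T in subsets(players)
               if not (S & T))

def scalar_mul(t, v):               # requires t >= 0
    return {S: t * val for S, val in v.items()}

def vector_add(v, w):               # superposition of the two games
    return {S: v[S] + w[S] for S in v}

# The reduced essential three-person game with gamma = 1 (cf. (29:1) below):
players = (1, 2, 3)
v = {S: (0, -1, 1, 0)[len(S)] for S in subsets(players)}
assert is_characteristic(v, players)
assert is_characteristic(scalar_mul(2, v), players)   # change of unit
assert is_characteristic(vector_add(v, v), players)   # superposition
```

Both operations visibly preserve the conditions (25:3:a)-(25:3:c), in accord with the text.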
27.6.3. We do not propose to enter upon a systematic investigation of
these operations, i.e. of their influence upon the strategic situations in the
games which they affect. It may be useful, however, to make some remarks
on this subject without attempting in any way to be exhaustive.
We observe first that combinations of the operations of scalar multiplication
and vector addition also can now be interpreted directly. Thus the
characteristic function
(27:10)    z(S) = tv(S) + sw(S)
belongs to the game which arises by superposition of the games of v(S) and
w(S) if their units of utility are first multiplied by t and s respectively.
If s = 1 − t, then (27:10) corresponds to the formation of the center of
gravity in the sense of (16:A:c) in 16.2.1.
It will appear from the discussion in 35.3.4. (cf. in particular footnote 1
on p. 304 below) that even this seemingly elementary operation can have
very involved consequences as regards strategy.
We observe next that there are some cases where our operations have no
consequences in strategy.
First, the scalar multiplication by a t > 0 alone, being a mere change in
unit, has no such consequences.
1 Indeed, t < 0 would upset (25:3:c) in 25.3.1. Note that a multiplication of the
original ℋ_k(τ_1, ..., τ_n) with a t < 0 would be perfectly feasible. It is simplest to
consider a multiplication by t = −1, i.e. a change in sign. But a change of sign of the
ℋ_k(τ_1, ..., τ_n) does not at all correspond to a change of sign of the v(S). This should be
clear by common sense, as a reversal of gains and losses modifies all strategic considerations
in a very involved way. (This reversal and some of its consequences are familiar
to chess players.) A formal corroboration of our assertion may be found by inspecting
the definitions of 25.1.3.
2 Vector spaces with this restriction of scalar multiplication are sometimes called
positive vector spaces. We do not need to enter upon their systematic theory.
3 This should be intuitively obvious. An exact verification with the help of 25.1.3.
involves a somewhat cumbersome notation, but no real difficulties.
GROUPS, SYMMETRY AND FAIRNESS 255
Second, and this is of greater significance, the strategic equivalence
discussed in 27.1. is a superposition: we pass from the game of v(S) to the
strategically equivalent game of v′(S) by superposing on the former an
inessential game. 1 (Cf. (27:1) and (27:2) in 27.1.1. and, concerning inessentiality,
27.3.1. and (27:C) in 27.4.2.) We may express this in the following
way: we know that an inessential game is one in which coalitions play no
role. The superposition of such a game on another one does not disturb
strategic equivalence, i.e. it leaves the strategic structure of that game
unaffected.
28. Groups, Symmetry and Fairness
28.1. Permutations, Their Groups, and Their Effect on a Game
28.1.1. Let us now consider the role of symmetry, or more generally, the
effects of interchanging the players 1, ..., n, or their numbers, in an
n-person game Γ. This will naturally be an extension of the corresponding
study made in 17.11. for the zero-sum two-person game.
This analysis begins with what is in the main a repetition of the steps
taken in 17.11. for n = 2. But since the interchanges of the symbols
1, ..., n offer for a general n many more possibilities than for n = 2, it is
indicated that we should go about it somewhat more systematically.
Consider the n symbols 1, ..., n. Form any permutation P of these
symbols. P is described by stating for every i = 1, ..., n into which i^P
(also = 1, ..., n) P carries it. So we write:
(28:1)    P: i → i^P,
or by way of complete enumeration:
(28:2)    P = ( 1, ..., n / 1^P, ..., n^P ).
Among the permutations some deserve special mention:
(28:A:a) The identity I_n which leaves every i (= 1, ..., n) unchanged:
    i → i^{I_n} = i.
(28:A:b) Given two permutations P, Q, their product PQ, which
consists in carrying out first P and then Q:
    i → i^{PQ} = (i^P)^Q.
1 With the characteristic function w(S) = Σ_{k in S} α_k^0; then in our above notations
v′(S) = v(S) + w(S).
2 E.g. for n = 2: P = ( 1 2 / 2 1 ). The identity (cf. below) is
I_n = ( 1, 2, ..., n / 1, 2, ..., n ).
The number of all possible permutations is the factorial of n:
    n! = 1 · 2 · ... · n,
and they form together the symmetric group of permutations S_n. Any
subsystem G of S_n which fulfills these two conditions:
(28:A:a*)    I_n belongs to G,
(28:A:b*)    PQ belongs to G if P and Q do,
is a group of permutations. 1
A permutation P carries every subset S of I = (1, ..., n) into another
subset S^P. 2
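The calculus of permutations just introduced can be sketched in executable form (the dict representation i → i^P is ours):

```python
# A permutation P of the symbols 1, ..., n is represented as a dict i -> i^P.

def product(P, Q):
    """PQ of (28:A:b): carry out first P and then Q, i.e. i -> (i^P)^Q."""
    return {i: Q[P[i]] for i in P}

def identity(n):
    """I_n of (28:A:a)."""
    return {i: i for i in range(1, n + 1)}

def image(S, P):
    """S^P: the subset into which P carries the subset S."""
    return frozenset(P[i] for i in S)

P = {1: 2, 2: 3, 3: 1}
Q = {1: 3, 2: 1, 3: 2}             # Q happens to be the inverse of P
assert product(P, Q) == identity(3)
assert image(frozenset({1, 2}), P) == frozenset({2, 3})
```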
28.1.2. After these general and preparatory remarks we now proceed
to apply their concepts to an arbitrary n-person game Γ.
Perform a permutation P on the symbols 1, ..., n denoting the players
of Γ. I.e. denote the player k = 1, ..., n by k^P instead of k; this transforms
the game Γ into another game Γ^P. The replacement of Γ by Γ^P must
make its influence felt in two respects: in the influence which each player
exercises on the course of the play, i.e. in the index k of the variable τ_k
which each player chooses; and in the outcome of the play for him, i.e. in
the index k of the function ℋ_k which expresses this. 3 So Γ^P is again in the
normalized form, with functions ℋ^P_k(τ_1, ..., τ_n), k = 1, ..., n. In
expressing ℋ^P_k(τ_1, ..., τ_n) by means of ℋ_k(τ_1, ..., τ_n), we must remember:
the player k in Γ had ℋ_k; now he is k^P in Γ^P, so he has ℋ^P_{k^P}. If we form
ℋ_k with the variables τ_1, ..., τ_n, then we express the outcome of the
game Γ^P when the player whose designation in Γ^P is k chooses τ_k. So the
player k in Γ, who is k^P in Γ^P, chooses τ_{k^P}. So the variables in ℋ_k must be
τ_{1^P}, ..., τ_{n^P}. We have therefore:
(28:3)    ℋ^P_{k^P}(τ_1, ..., τ_n) ≡ ℋ_k(τ_{1^P}, ..., τ_{n^P}). 4
1 For the important and extensive theory of groups compare L. C. Mathewson:
Elementary Theory of Finite Groups, Boston 1930; W. Burnside: Theory of Groups of
Finite Order, 2nd Ed. Cambridge 1911; A. Speiser: Theorie der Gruppen von endlicher
Ordnung, 3rd Edit. Berlin 1937.
We shall not need any particular results or concepts of group theory, and mention
the above literature only for the use of the reader who may want to acquire a deeper
insight into that subject.
Although we do not wish to tie up our exposition with the intricacies of group theory,
we nevertheless introduced some of its basic termini for this reason : a real understanding
of the nature and structure of symmetry is not possible without some familiarity with
(at least) the elements of group theory. We want to prepare the reader who may want to
proceed in this direction, by using the correct terminology.
For a fuller exposition of the relationship between symmetry and group theory, cf.
H. Weyl: Symmetry, Journ. Washington Acad. of Sciences, Vol. XXVIII (1938), pp.
253ff.
2 If S = (k_1, ..., k_p), then S^P = (k_1^P, ..., k_p^P).
3 Cf. the similar situation for n = 2 in footnote 1 on p. 109.
4 The reader will observe that the superscript P for the index k of the functions ℋ
themselves appears on the left-hand side, while the superscripts P for the indices of the
variables τ appear on the right-hand side. This is the correct arrangement; and the
argument preceding (28:3) was needed to establish it.
The importance of getting this point faultless and clear lies in the fact that we could
Denote the characteristic functions of Γ and Γ^P by v(S) and v^P(S)
respectively. Since the players, who form in Γ^P the set S^P, are the same ones
who form in Γ the set S, we have
(28:4)    v^P(S^P) = v(S) for every S. 1
28.1.3. If (for a particular P) Γ coincides with Γ^P, then we say that Γ is
invariant or symmetric with respect to P. By virtue of (28:3) this is
expressed by
(28:5)    ℋ_{k^P}(τ_1, ..., τ_n) ≡ ℋ_k(τ_{1^P}, ..., τ_{n^P}).
When this is the case, then (28:4) becomes
(28:6)    v(S^P) = v(S) for every S.
Given any Γ, we can form the system G_Γ of all P with respect to which Γ
is symmetric. It is clear from (28:A:a), (28:A:b) above, that the identity I_n
belongs to G_Γ, and that if P, Q belong to G_Γ, then their product PQ does too.
So G_Γ is a group by (28:A:a*), (28:A:b*) above. We call G_Γ the invariance
group of Γ.
Observe that (28:6) can now be stated in this form:
(28:7)    v(S) = v(T) if there exists a P in G_Γ with S^P = T,
i.e. which carries S into T.
The size of G_Γ, i.e. the number of its elements, gives some sort of a
measure of "how symmetric" Γ is. If every permutation P (other than the
identity I_n) changes Γ, then G_Γ consists of I_n alone; Γ is totally unsymmetric.
If no permutation P changes Γ, then G_Γ contains all P, i.e. it is the symmetric
group S_n; Γ is totally symmetric. There are, of course, numerous
intermediate cases between these two extremes, and the precise structure
of Γ's symmetry (or lack of it) is disclosed by the group G_Γ.
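At the level of the characteristic function this can be computed mechanically. The sketch below (representation ours) collects all P with v(S^P) = v(S) for every S; note that this condition, (28:6), is only a consequence of the invariance (28:5) of Γ itself, so the group found here contains G_Γ but need not equal it:

```python
from itertools import combinations, permutations

def subsets(players):
    for r in range(len(players) + 1):
        for c in combinations(players, r):
            yield frozenset(c)

def v_invariance_group(v, players):
    """All P with v(S^P) = v(S) for every S -- condition (28:6)."""
    G = []
    for perm in permutations(players):
        P = dict(zip(players, perm))
        if all(v[frozenset(P[i] for i in S)] == v[S] for S in subsets(players)):
            G.append(P)
    return G

players = (1, 2, 3)
v = {S: (0, -1, 1, 0)[len(S)] for S in subsets(players)}  # depends on |S| only
G = v_invariance_group(v, players)
assert len(G) == 6     # all of S_3: total symmetry at the level of v(S)
```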
28.1.4. The condition after (28:7) implies that S and T have the same
number of elements. The converse implication, however, need not be
true if G_Γ is small enough, i.e. if Γ is unsymmetric enough. It is therefore
not otherwise be sure that successive applications of the superscripts P and Q (in this
order) to τ will give the same result as a (single) application of the superscript PQ to τ.
The reader may find the verification of this a good exercise in handling the calculus of
permutations.
For n = 2 and P = ( 1 2 / 2 1 ), application of P on either side had the same effect, so it is
not necessary to be exhaustive on this point. Cf. footnote 1 on p. 109.
5 In the zero-sum two-person game, ℋ ≡ ℋ_1 ≡ −ℋ_2, and similarly ℋ^P ≡ ℋ^P_1 ≡ −ℋ^P_2.
Hence in this case (cf. above, n = 2 and P = ( 1 2 / 2 1 )) (28:3) becomes ℋ^P(τ_1, τ_2) ≡ ℋ(τ_2, τ_1).
This is in accord with the formulae of 14.6. and 17.11.2.
But this simplification is possible only in the zero-sum two-person game; in all
other cases we must rely upon the general formula (28:3) alone.
1 This conceptual proof is clearer and simpler than a computational one, which could
be based on the formulae of 25.1.3. The latter, however, would cause no difficulties
either, only more extensive notations.
of interest to consider those groups G = G_Γ which permit this converse
implication, i.e. for which the following is true:
(28:8)    If S, T have the same number of elements, then there exists
a P in G with S^P = T, i.e. which carries S into T.
This condition (28:8) is obviously satisfied when G is the symmetric group
S_n, i.e. for the G = G_Γ = S_n of a totally symmetric Γ. It is also satisfied
for certain smaller groups, i.e. for certain Γ of less than total symmetry. 1
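Condition (28:8) lends itself to direct verification for small n. A sketch (ours), checking the symmetric and the alternating group for n = 3, in accord with footnote 1 below:

```python
from itertools import combinations, permutations

def is_set_transitive(G, players):
    """(28:8): S, T of equal size imply some P in G with S^P = T."""
    for r in range(1, len(players)):
        sets = [frozenset(c) for c in combinations(players, r)]
        for S in sets:
            for T in sets:
                if not any(frozenset(P[i] for i in S) == T for P in G):
                    return False
    return True

players = (1, 2, 3)
S3 = [dict(zip(players, p)) for p in permutations(players)]
A3 = [dict(zip(players, p)) for p in [(1, 2, 3), (2, 3, 1), (3, 1, 2)]]
assert is_set_transitive(S3, players)     # the symmetric group
assert is_set_transitive(A3, players)     # the alternating group, n = 3
```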
28.2. Symmetry and Fairness
28.2.1. At any rate, whenever (28:8) holds for G = G_Γ, we can conclude
from (28:7):
(28:9)    v(S) depends only upon the number of elements in S.
That is:
(28:10)    v(S) = v_p,
where p is the number of elements in S (p = 0, 1, ..., n).
Consider the conditions (25:3:a)-(25:3:c) in 25.3.1., which give an
exhaustive description of all characteristic functions v(S). It is easy to
rewrite them for v_p when (28:10) holds. They become:
(28:11:a)    v_0 = 0,
(28:11:b)    v_{n−p} = −v_p,
(28:11:c)    v_{p+q} ≥ v_p + v_q for p + q ≤ n.
(27:3) in 27.1.4. is clearly a consequence of (28:10) (i.e. of (28:9)),
so that such a v(S) is automatically reduced, with γ = −v_1. We have
therefore, in particular, (27:7), (27:7*), (27:7**) in 27.2., i.e. the conditions
of Figure 50.
Condition (28:11:c) can be rewritten, by a procedure which is parallel
to that of (25:A) in 25.4.2.
1 For n = 2, S_2 consists of I_2 and of ( 1 2 / 2 1 ) (cf. several preceding references);
so G = S_2 is the only possibility of any symmetry.
Consider therefore n ≥ 3, and call G set-transitive if it fulfills (28:8). The question,
which G ≠ S_n are then set-transitive, is of a certain group-theoretical interest, but we
need not concern ourselves with it in this work.
For the reader who is interested in group theory we nevertheless mention:
There exists a subgroup of S_n which contains half of its elements (i.e. ½ n!), known as
the alternating group 𝔄_n. This group is of great importance in group theory and has been
extensively discussed there. For n ≥ 3 it is easily seen to be set-transitive too.
So the real question is this: for which n ≥ 3 do there exist set-transitive groups
G ≠ S_n, 𝔄_n?
It is easy to show that for n = 3, 4 none exist. For n = 5, 6 such groups do exist.
(For n = 5 a set-transitive group G with 20 elements exists, while S_5, 𝔄_5 have 120, 60
elements respectively. For n = 6 a set-transitive group G with 120 elements exists,
while S_6, 𝔄_6 have 720, 360 elements respectively.) For n = 7, 8 rather elaborate
group-theoretical arguments show that no such groups exist. For n = 9 the question is still
open. It seems probable that no such groups exist for any n ≥ 9, but this assertion has
not yet been established for all these n.
Put r = n − p − q; then (28:11:b) permits us to state (28:11:c) as
follows:
(28:11:c*)    v_p + v_q + v_r ≤ 0 if p + q + r = n.
Now (28:11:c*) is symmetric with respect to p, q, r; 1 hence we may make
p ≤ q ≤ r by an appropriate permutation. Furthermore, when p = 0
(hence r = n − q), then (28:11:c*) follows from (28:11:a), (28:11:b)
(even with =). Thus we may assume p ≠ 0. So we need to require
(28:11:c*) only for 1 ≤ p ≤ q ≤ r, and therefore the same is true for
(28:11:c). Observe finally that, as r = n − p − q, the inequality q ≤ r
means p + 2q ≤ n. We restate this:
(28:12)    It suffices to require (28:11:c) only when
1 ≤ p ≤ q, p + 2q ≤ n. 2
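The conditions (28:11:a)-(28:11:c), and the economy effected by (28:12), can be checked mechanically. A sketch (the list representation of v_0, ..., v_n and the function names are ours):

```python
# v is the list [v_0, ..., v_n] of a characteristic function obeying (28:10).

def ok_full(v):
    n = len(v) - 1
    return (v[0] == 0 and                                     # (28:11:a)
            all(v[n - p] == -v[p] for p in range(n + 1)) and  # (28:11:b)
            all(v[p + q] >= v[p] + v[q]                       # (28:11:c)
                for p in range(n + 1) for q in range(n + 1 - p)))

def ok_restricted(v):
    n = len(v) - 1
    return (v[0] == 0 and
            all(v[n - p] == -v[p] for p in range(n + 1)) and
            all(v[p + q] >= v[p] + v[q]                       # only (28:12):
                for p in range(1, n + 1) for q in range(p, n + 1)
                if p + 2 * q <= n))

v = [0, -1, 0, 1, 0]       # a reduced, fair four-person game, gamma = 1
assert ok_full(v) and ok_restricted(v)
bad = [0, 1, 0, -1, 0]     # violates superadditivity: v_2 < v_1 + v_1
assert not ok_full(bad) and not ok_restricted(bad)
```

The agreement of the two tests on these examples illustrates, but of course does not prove, the equivalence asserted in (28:12).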
28.2.2. The property (28:10) of the characteristic function is a consequence
of symmetry, but this property is also important in its own right.
This becomes clear when we consider it in the simplest possible special case:
for n = 2.
Indeed, for n = 2 (28:10) simply means that the v′ of 17.8.1. vanishes. 3
This means, in the terminology of 17.11.2., that the game Γ is fair. We
extend this concept: The n-person game Γ is fair when its characteristic
function v(S) fulfills (28:9), i.e. when it is a v_p of (28:10). Now, as in
17.11.2., this notion of fairness of the game embodies what is really essential
in the concept of symmetry. It must be remembered, however, that the
concept of fairness, and similarly that of total symmetry of the game,
may or may not imply that all individual players can expect the same fate
in an individual play (provided that they play well). For n = 2 this
implication did hold, but not for n ≥ 3! (Cf. 17.11.2. for the former, and
footnotes 1 and 2 on p. 225 for the latter.)
28.2.3. We observe, finally, that by (27:7), (27:7*), (27:7**) in 27.2., or
by Figure 50, all reduced games are symmetric and hence fair, when n ≤ 3,
but not when n ≥ 4. (Cf. the discussion in 27.5.2.) Now the unrestricted
zero-sum n-person game is brought into its reduced form by the fixed
extra payments α_1^0, ..., α_n^0 (to the players 1, ..., n, respectively), as
described in 27.1. Thus the unfairness of a zero-sum three-person game,
i.e. what is really effective in its asymmetry, is exhaustively expressed
by these α_1^0, α_2^0, α_3^0; that is, by fixed, definite payments. (Cf. also the
"basic values" a′, b′, c′ of 22.3.4.) In a zero-sum n-person game with
1 Both in its assertion and in its hypothesis!
2 These inequalities replace the original p + q ≤ n; they are obviously much stronger.
As they imply 3p ≤ p + 2q ≤ n and 1 + 2q ≤ p + 2q ≤ n, we have
    p ≤ n/3,  q ≤ (n − 1)/2.
3 By definition v′ = v((1)) = −v((2)). For n = 2 the only essential assertion of
(28:9) (which is equivalent to (28:10)) is v((1)) = v((2)). Due to the above, this means
precisely that v′ = −v′, i.e. that v′ = 0.
n ≥ 4, this is no longer always possible, since the reduced form need
not be fair. That is, there may exist, in such a game, much more fundamental
differences between the strategic positions of the players, which
cannot be expressed by the α_1^0, ..., α_n^0, i.e. by fixed, definite payments.
This will become amply clear in the course of Chapter VII. In the same
connection it is also useful to recall footnote 1 on p. 250.
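The reduction by fixed payments described above can be sketched as follows; the numerical game is a hypothetical example of our own choosing, and the representation is ours (cf. (27:1), (27:2) in 27.1.):

```python
from itertools import combinations

def subsets(players):
    for r in range(len(players) + 1):
        for c in combinations(players, r):
            yield frozenset(c)

def reduce_game(v, players):
    """Fixed payments alpha_1^0, ..., alpha_n^0 with sum 0 such that
    v'(S) = v(S) + sum of alpha_i^0 over S is reduced (all v'((i)) = -gamma)."""
    n = len(players)
    gamma = -sum(v[frozenset({i})] for i in players) / n
    alpha = {i: -gamma - v[frozenset({i})] for i in players}
    v_red = {S: v[S] + sum(alpha[i] for i in S) for S in subsets(players)}
    return v_red, alpha, gamma

# A hypothetical zero-sum three-person game (the a, b, c notation of 23.1.1.):
players = (1, 2, 3)
v = {frozenset(): 0,
     frozenset({1}): -1, frozenset({2}): -2, frozenset({3}): -3,
     frozenset({2, 3}): 1, frozenset({1, 3}): 2, frozenset({1, 2}): 3,
     frozenset({1, 2, 3}): 0}
v_red, alpha, gamma = reduce_game(v, players)
assert abs(sum(alpha.values())) < 1e-9                        # payments sum to 0
assert all(v_red[frozenset({i})] == -gamma for i in players)  # reduced form
```

The asymmetry of the original game is thus absorbed entirely into the three fixed payments, in accord with the text's remark for n = 3.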
29. Reconsideration of the Zero-sum Three-person Game
29.1. Qualitative Discussion
29.1.1. We are now prepared for the main undertaking: To formulate
the principles of the theory of the zero-sum n-person game. 1 The characteristic
function v(S), which we have defined in the preceding sections, provides
the necessary tool for this operation.
Our procedure will be the same as before: We must select a special case
to serve as a basis for further investigation. This shall be one which we
have already settled and which we nevertheless deem sufficiently characteristic
for the general case. By analyzing the (partial) solution found
in this special case, we shall then try to crystallize the rules which should
govern the general case. After what we said in 4.3.3. and in 25.2.2., it
ought to be plausible that the zerosum threeperson game will be the special
case in question.
29.1.2. Let us therefore reconsider the argument by which our present
solution of the zero-sum three-person game was obtained. Clearly the
essential case will be the one of interest. We know now that we may as
well consider it in its reduced form, 2 and that we may also choose γ = 1. 3
The characteristic function in this case is completely determined, as discussed
in the second case of 27.5.2.:
(29:1)    v(S) = 0, −1, 1, 0 according as S has 0, 1, 2, 3 elements.
We saw that in this game everything is decided by the (twoperson)
coalitions which form, and our discussions 4 produced the following main
conclusions:
Three coalitions may form, and accordingly the three players will
finish the play with the following results:
1 Of course the general nperson game will still remain, but we shall be able to solve
it with the help of the zerosum games. The greatest step is the present one : the passage
to the zerosum nperson games.
2 Cf. 27.1.4. and 27.3.2.
3 In the notation of 23.1.1. this means a = b = c = 1. The general parts of the
discussions referred to were those in 22.2., 22.3., 23. The above specialization takes us
actually back to the earlier (more special) case of 22.1. So our considerations of 27.1.
(on strategic equivalence and reduction) have actually this effect in the zero-sum
three-person games: they carry the general case back into the preceding special one, as stated
above.
4 In 22.2.2., 22.2.3.; but these are really just elaborations of those in 22.1.2., 22.1.3.
RECONSIDERATION 261
    Coalition (1,2): amounts ½, ½, −1 to the players 1, 2, 3.
    Coalition (1,3): amounts ½, −1, ½ to the players 1, 2, 3.
    Coalition (2,3): amounts −1, ½, ½ to the players 1, 2, 3.
Figure 51.
This "solution" calls for interpretation, and the following remarks suggest
themselves in particular: 1
29.1.3.
(29:A:a) The three distributions specified above correspond to all
strategic possibilities of the game.
(29:A:b) None of them can be considered a solution in itself; it is the
system of all three and their relationship to each other which
really constitute the solution.
(29:A:c) The three distributions possess together, in particular, a
"stability" to which we have referred thus far only very
sketchily. Indeed no equilibrium can be found outside of
these three distributions; and so one should expect that any
kind of negotiation between the players must always in fine
lead to one of these distributions.
(29:A:d) Again it is conspicuous that this "stability" is only a
characteristic of all three distributions viewed together. No
one of them possesses it alone; each one, taken by itself, could
be circumvented if a different coalition pattern should spread
to the necessary majority of players.
29.1.4. We now proceed to search for an exact formulation of the
heuristic principles which lead us to the solutions of Figure 51, always
keeping in mind the remarks (29:A:a)(29:A:d).
A more precise statement of the intuitively recognizable "stability"
of the system of three distributions in Figure 51, which should be a concise
summary of the discussions referred to in footnote 4 on p. 260, leads us
back to a position already taken in the earlier, qualitative discussions. 2
It can be put as follows:
(29:B:a) If any other scheme of distribution should be offered for
consideration to the three players, then it will meet with
1 These remarks take up again the considerations of 4.3.3. In connection with
(29:A:d) the second half of 4.6.2. may also be recalled.
2 These viewpoints permeate all of 4.4.-4.6., but they appear more specifically in
4.4.1. and 4.6.2.
262 GENERAL THEORY: ZEROSUM nPERSONS
rejection for the following reason: a sufficient number of
players 1 prefer, in their own interest, at least one of the distributions
of the solution (i.e. of Figure 51), and are convinced,
or can be convinced, 2 of the possibility of obtaining the
advantages of that distribution.
(29:B:b) If, however, one of the distributions of the solution is
offered, then no such group of players can be found.
We proceed to discuss the merits of this heuristic principle in a more
exact way.
29.2. Quantitative Discussion
29.2.1. Suppose that β_1, β_2, β_3 is a possible method of distribution
between the players 1, 2, 3. I.e.
    β_1 + β_2 + β_3 = 0.
Then, since by definition v((i)) (= −1) is the amount that player i can get
for himself (irrespective of what all others do), he will certainly block any
distribution with β_i < v((i)). We assume accordingly that
    β_i ≥ −1 for i = 1, 2, 3.
We may permute the players 1, 2, 3 so that
    β_1 ≥ β_2 ≥ β_3.
Now assume β_2 < ½. Then a fortiori β_3 < ½. Consequently the players
2, 3 will both prefer the last distribution of Figure 51, 3 where they both get
the higher amount ½. 4 Besides, it is clear that they can get the advantage of
that distribution (irrespective of what the third player does), since the
amounts ½, ½ which it assigns to them do not exceed together v((2, 3)) = 1.
If, on the other hand, β_2 ≥ ½, then a fortiori β_1 ≥ ½. Since β_3 ≥ −1,
this is possible only when β_1 = β_2 = ½, β_3 = −1, i.e. when we have the first
distribution of Figure 51. (Cf. footnote 3 above.)
1 Of course, in this case, two.
2 What this "convincing" means was discussed in 4.4.3. Our discussion which
follows will make it perfectly clear.
3 Since we made an unspecified permutation of the players 1, 2, 3, the last distribution
of Fig. 51 really stands for all three.
4 Observe that each one of these two players profits by such a change separately and
individually. It would not suffice to have only the totality (of these two) profit. Cf.,
e.g., the first distribution of Fig. 51 with the second; the players 1, 3 as a totality would
profit by the change from the former to the latter, and nevertheless the first distribution
is just as good a constituent of the solution as any other.
In this particular change, player 3 would actually profit (getting ½ instead of −1),
and for player 1 the change is indifferent (getting ½ in both cases). Nevertheless player 1
will not act unless further compensations are made, and these can be disregarded in this
connection. For a more careful discussion of this point, cf. the last part of this section.
EXACT FORM OF THE GENERAL DEFINITIONS 263
This establishes (29:B:a) at the end of 29.1.4. (29:B:b) loc. cit. is
immediate: in each of the three distributions of Figure 51 there is, to be
sure, one player who is desirous of improving his standing, 1 but since there is
only one, he is not able to do so. Neither of his two possible partners gains
anything by forsaking his present ally and joining the dissatisfied player:
already each gets ½, and they can get no more in any alternative distribution
of Figure 51. 2
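The case analysis above can be spot-checked numerically. A sketch of our own (γ = 1 throughout; the sampling is merely illustrative):

```python
import random

# Spot-check of 29.2.1: ordered distributions beta_1 >= beta_2 >= beta_3 >= -1
# with sum 0; the concrete sampling scheme is ours, not the text's.
v23 = 1                       # v((2,3)) in the reduced game
random.seed(0)
for _ in range(1000):
    b1, b2 = sorted((random.uniform(-1, 2), random.uniform(-1, 2)), reverse=True)
    b3 = -b1 - b2
    if not (b1 >= b2 >= b3 >= -1):
        continue              # not an admissible, ordered distribution
    if b2 < 0.5:
        # players 2 and 3 both prefer (-1, 1/2, 1/2), which is effective:
        assert b3 < 0.5 and 0.5 + 0.5 <= v23
    else:
        # only the distribution (1/2, 1/2, -1) itself remains possible
        assert abs(b1 - 0.5) < 1e-9 and b3 == -1
```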
29.2.2. This point may be clarified further by some heuristic elaboration.
We see that the dissatisfied player finds no one who desires spontaneously
to be his partner, and he can offer no positive inducement to anyone to
join him; certainly none but an offer to concede more than ½ from the proceeds
of their future coalition. The reason for regarding such an offer as ineffective
can be expressed in two ways: on purely formal grounds this offer may
be excluded because it corresponds to a distribution which is outside the
scheme of Figure 51; the real subjective motive for which any prospective
partner would consider it unwise 3 to accept a coalition under such conditions
is most likely the fear of subsequent disadvantage: there may be further
negotiations preceding the formation of a coalition, in which he would be
found in a particularly vulnerable position. (Cf. the analysis in 22.1.2.,
22.1.3.)
So there is no way for the dissatisfied player to overcome the indifference
of the two possible partners. We stress: there is, on the side of the two
possible partners, no positive motive against a change into another distribution
of Figure 51, but just the indifference characteristic of certain types of
stability. 4
30. The Exact Form of the General Definitions
30.1. The Definitions
30.1.1. We return to the case of the zero-sum n-person game Γ with
general n. Let the characteristic function of Γ be v(S).
We proceed to give the decisive definitions.
In accordance with the suggestions of the preceding paragraphs we
mean by a distribution or imputation a set of n numbers α_1, ..., α_n with
the following properties:
(30:1)    α_i ≥ v((i)) for i = 1, ..., n,
(30:2)    Σ_{i=1}^{n} α_i = 0.
1 The one who gets −1.
2 The reader may find it a good exercise to repeat this discussion with a general (not
reduced) v(S), i.e. with general a, b, c, and the quantities of 22.3.4. The result is the
same; it cannot be otherwise, since our theory of strategic equivalence and reduction is
correct. (Cf. footnote 3 on p. 260.)
3 Or unsound, or unethical.
4 At every change from one distribution of Fig. 51 to another, one player is definitely
against, one definitely for it; and so the remaining player blocks the change by his
indifference.
It may be convenient to view these systems α = (α_1, ..., α_n) as vectors in the
n-dimensional linear space L_n in the sense of 16.1.2.
A set S (i.e. a subset of I = (1, ..., n)) is called effective for the imputation
α if
(30:3)    Σ_{i in S} α_i ≤ v(S).
An imputation α dominates another imputation β, in symbols
    α ≻ β,
if there exists a set S with the following properties:
(30:4:a)    S is not empty,
(30:4:b)    S is effective for α,
(30:4:c)    α_i > β_i for all i in S.
A set V of imputations is a solution if it possesses the following properties:
(30:5:a)    No β in V is dominated by an α in V,
(30:5:b)    Every β not in V is dominated by some α in V.
(30:5:a) and (30:5:b) can be stated as a single condition:
(30:5:c)    The elements of V are precisely those imputations which
are undominated by any element of V.
(Cf. footnote 1 on p. 40.)
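The definitions (30:1)-(30:5) translate directly into executable form. The following sketch (representation ours) verifies that the three distributions of Figure 51 satisfy (30:5:a) and (30:5:b) over a finite grid of sample imputations; only a finite sample is examined, of course, the full statement requiring the argument of 29.2.:

```python
from itertools import combinations

def subsets(players):                     # the nonempty S of (30:4:a)
    for r in range(1, len(players) + 1):
        for c in combinations(players, r):
            yield frozenset(c)

def is_imputation(a, v, players):
    return (abs(sum(a[i] for i in players)) < 1e-9 and          # (30:2)
            all(a[i] >= v[frozenset({i})] for i in players))    # (30:1)

def dominates(a, b, v, players):
    """Some nonempty S, effective for a (30:3), with a_i > b_i on S (30:4:c)."""
    return any(sum(a[i] for i in S) <= v[S] and all(a[i] > b[i] for i in S)
               for S in subsets(players))

def is_solution(V, candidates, v, players):
    inner = not any(dominates(a, b, v, players) for a in V for b in V)  # (30:5:a)
    outer = all(any(dominates(a, b, v, players) for a in V)
                for b in candidates if b not in V)                      # (30:5:b)
    return inner and outer

players = (1, 2, 3)
v = {S: (None, -1, 1, 0)[len(S)] for S in subsets(players)}
V = [{1: 0.5, 2: 0.5, 3: -1.0}, {1: 0.5, 2: -1.0, 3: 0.5},
     {1: -1.0, 2: 0.5, 3: 0.5}]                         # Figure 51
grid = [{1: x / 4, 2: y / 4, 3: -(x + y) / 4}
        for x in range(-4, 9) for y in range(-4, 9)]
candidates = [b for b in grid if is_imputation(b, v, players)]
assert is_solution(V, candidates, v, players)
```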
30.1.2. The meaning of these definitions can, of course, be visualized
when we recall the considerations of the preceding paragraphs and also of
the earlier discussions of 4.4.3.
To begin with, our distributions or imputations correspond to the more
intuitive notions of the same name in the two places referred to. What we
call an effective set is nothing but the players who "are convinced or can be
convinced" of the possibility of obtaining what they are offered by α; cf.
again 4.4.3. and (29:B:a) in 29.1.4. The condition (30:4:c) in the definition
of domination expresses that all these players have a positive motive for
preferring α to β. It is therefore apparent that we have defined domination
entirely in the spirit of 4.4.1., and of the preference described by
(29:B:a) in 29.1.4.
The definition of a solution agrees completely with that given in 4.5.3.,
as well as with (29:B:a), (29:B:b) in 29.1.4.
30.2. Discussion and Recapitulation
30.2.1. The motivation for all these definitions has been given at the
places to which we referred in detail in the course of the last paragraph.
We shall nevertheless re-emphasize some of their main features, particularly
the concept of a solution.
We have already seen in 4.6. that our concept of a solution of a game
corresponds precisely to that of a "standard of behavior" of everyday
parlance. Our conditions (30:5:a), (30:5:b), which correspond to the
conditions (4:A:a), (4:A:b) of 4.5.3., express just the kind of "inner
stability" which is expected of workable standards of behavior. This was
elaborated further in 4.6. on a qualitative basis. We can now reformulate
those ideas in a rigorous way, considering the exact character which the
discussion has now assumed. The remarks we wish to make are these: 1
30.2.2.
(30:A:a) Consider a solution V. We have not excluded for an
imputation β in V the existence of an outside imputation α′
(not in V) with α′ ≻ β. 2 If such an α′ exists, the attitude
of the players must be imagined like this: If the solution V
(i.e. this system of imputations) is "accepted" by the players 1,
..., n, then it must impress upon their minds the idea that
only the imputations β in V are "sound" ways of distribution.
An α′ not in V with α′ ≻ β, although preferable to an
effective set of players, will fail to attract them, because it is
"unsound." (Cf. our detailed discussion of the zero-sum three-person
game, especially as to the reason for the aversion of each
player to accept more than the determined amount in a coalition.
Cf. the end of 29.2. and its references.) The view of the
"unsoundness" of α′ may also be supported by the existence
of an α in V with α ≻ α′ (cf. (30:A:b) below). All these
arguments are, of course, circular in a sense and again
depend on the selection of V as a "standard of behavior," i.e.
as a criterion of "soundness." But this sort of circularity is
not unfamiliar in everyday considerations dealing with
"soundness."
1 The remarks (30:A:a)-(30:A:d) which follow are a more elaborate and precise
presentation of the ideas of 4.6.2. Remark (30:A:e) bears the same relationship to
4.6.3.
2 Indeed, we shall see in (31:M) of 31.2.3. that an imputation β, for which never
α′ ≻ β, exists only in inessential games.
(30:A:b) If the players 1, , n have accepted the solution V as a
" standard of behavior/ 1 then the ability to discredit with the
help of V (i.e. of its elements) any imputation not in Vi is
necessary in order to maintain their faith in V. Indeed, for
every outside a ' (not in V) there must exist an a in V with
a H a '. (This was our postulate (30:5:b).)
(30:A:c) Finally there must be no inner contradiction in V> i.e. for
a , ft in V, never a H ft . (This was our other postulate
(30:5:a).)
(30:A:d) Observe that if domination, i.e. the relation ≻, were transitive,
then the requirements (30:A:b) and (30:A:c) (i.e. our
postulates (30:5:a) and (30:5:b)) would exclude the rather
delicate situation in (30:A:a). Specifically: In the situation of
(30:A:a), β belongs to V, α' does not, and α' ≻ β. By
(30:A:b) there exists an α in V so that α ≻ α'. Now if
domination were transitive we could conclude that α ≻ β,
which contradicts (30:A:c) since α, β both belong to V.
(30:A:e) The above considerations make it even more clear that only
V in its entirety is a solution and possesses any kind of stability,
but none of its elements individually. The circular character
stressed in (30:A:a) makes it plausible also that several
solutions V may exist for the same game, i.e. several stable
standards of behavior may exist for the same factual situation.
Each of these would, of course, be stable and consistent in itself,
but in conflict with all others. (Cf. also the end of 4.6.3. and
the end of 4.7.)
In many subsequent discussions we shall see that this multiplicity of
solutions is, indeed, a very general phenomenon.
30.3. The Concept of Saturation
30.3.1. It seems appropriate to insert at this point some remarks of a
more formalistic nature. So far we have paid attention mainly to the
meaning and motivation of the concepts which we have introduced, but the
notion of solution, as defined above, possesses some formal features which
deserve attention.
The formal logical considerations which follow will be of no immediate
use, and we shall not dwell upon them extensively, continuing afterwards
more in the vein of the preceding treatment. Nevertheless we deem
that these remarks are useful here for a more complete understanding of the
structure of our theory. Furthermore, the procedures to be used here will
have an important technical application in an entirely different connection
in 51.1.-51.4.
EXACT FORM OF THE GENERAL DEFINITIONS 267
30.3.2. Consider a domain (set) D for the elements x, y of which a certain
relation x R y exists. The validity of R between two elements x, y of D is
expressed by the formula x R y.¹ R is defined by a statement specifying
unambiguously for which pairs x, y of D the relation x R y is true, and for which it is not.
If x R y is equivalent to y R x, then we say that x R y is symmetric. For any
relation R we can define a new relation R^s by specifying x R^s y to mean the
conjunction of x R y and y R x. Clearly R^s is always symmetric and coincides
with R if and only if R is symmetric. We call R^s the symmetrized form of R.²
We now define:
(30:B:a) A subset A of D is R-satisfactory if and only if x R y holds
for all x, y of A.
(30:B:b) A subset A of D and an element y of D are R-compatible
if and only if x R y holds for all x of A.
From these one concludes immediately:
(30:C:a) A subset A of D is R-satisfactory if and only if this is true:
The y which are R-compatible with A form a superset of A.
We define next:
(30:C:b) A subset A of D is R-saturated if and only if this is true:
The y which are R-compatible with A form precisely the set A.
Thus the requirement which must be added to (30:C:a) in order to secure
(30:C:b) is this:
(30:D) If y is not in A, then it is not R-compatible with A; i.e.
there exists an x in A such that not x R y.
Consequently R-saturation may be equivalently defined by (30:B:a) and
(30:D).
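For a finite domain D these definitions can be checked mechanically. The following Python sketch is an editorial illustration, not part of the original text: it encodes a relation R as a two-argument predicate and tests (30:B:a) and (30:C:b) directly.

```python
def is_satisfactory(R, A):
    # (30:B:a): x R y holds for every pair x, y of A
    return all(R(x, y) for x in A for y in A)

def compatible(R, D, A):
    # the y of D which are R-compatible with A, i.e. x R y for all x in A
    return {y for y in D if all(R(x, y) for x in A)}

def is_saturated(R, D, A):
    # (30:C:b): the R-compatible y form precisely the set A
    return compatible(R, D, A) == set(A)

# First example of 30.3.3.: R is equality; the saturated sets are
# exactly the one-element sets.
D = {1, 2, 3}
eq = lambda x, y: x == y
assert is_satisfactory(eq, {1}) and is_saturated(eq, D, {1})
assert not is_saturated(eq, D, set())   # empty set: satisfactory, not saturated
```

Note that `compatible` applied to the empty set returns all of D (the condition is vacuous), which is why the empty set fails (30:C:b) whenever D is not empty.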
30.3.3. Before we investigate these concepts any further, we give some
examples. The verification of the assertions made in them is easy and will
be left to the reader.
First: Let D be any set and x R y the relation x = y. Then R-satisfactoriness
of A means that A is either empty or a one-element set, while
R-saturation of A means that A is a one-element set.
Second: Let D be a set of real numbers and x R y the relation x ≤ y.³
Then R-satisfactoriness of A means the same thing as above,⁴ while R-saturation
of A means that A is a one-element set, consisting of the greatest
element of D. Thus there exists no such A if D has no greatest element
¹ It is sometimes more convenient to use a formula of the form R(x, y), but for our
purposes x R y is preferable.
² Some examples: Let D consist of all real numbers. The relations x = y and x ≠ y
are symmetric. None of the four relations x ≤ y, x ≥ y, x < y, x > y is symmetric.
The symmetrized form of the two former is x = y (conjunction of x ≤ y and x ≥ y);
the symmetrized form of the two latter is an absurdity (conjunction of x < y and x > y).
³ D could be any other set in which such a relation is defined, cf. the second example in
65.4.1.
⁴ Cf. footnote 1 on p. 268.
(e.g. for the set of all real numbers) and A is unique if D has a greatest
element (e.g. when it is finite).
Third: Let D be the plane and x R y express that the points x, y have the
same height (ordinate). Then R-satisfactoriness of A means that all
points of A have the same height, i.e. lie on one parallel to the axis of
abscissae. R-saturation means that A is precisely a line parallel to the axis
of abscissae.
Fourth: Let D be the set of all imputations, and x R y the negation of the
domination x ≻ y. Then comparison of our (30:B:a), (30:D) with (30:5:a),
(30:5:b) in 30.1.1., or equally of (30:C:b) with (30:5:c) id., shows: R-saturation
of A means that A is a solution.
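By this fourth example, finding solutions amounts to finding the saturated sets of "not x ≻ y". For a finite domination relation this can be done by exhaustive search. The sketch below is an editorial illustration on a hypothetical toy digraph (not derived from any actual game): an arc x → y means x dominates y.

```python
from itertools import combinations

D = ['a', 'b', 'c', 'd']
# Toy domination relation: a and b each dominate c and d,
# while a and b neither dominate each other nor are dominated.
dom = {('a', 'c'), ('b', 'c'), ('a', 'd'), ('b', 'd')}
R = lambda x, y: (x, y) not in dom        # R = "x does not dominate y"

def solutions(D, R):
    # A is a solution iff it is R-saturated: no domination inside A
    # (30:5:a), and every y outside A is dominated by some x in A (30:5:b).
    sols = []
    for r in range(len(D) + 1):
        for A in combinations(D, r):
            inner = all(R(x, y) for x in A for y in A)
            outer = all(any(not R(x, y) for x in A)
                        for y in D if y not in A)
            if inner and outer:
                sols.append(set(A))
    return sols

assert solutions(D, R) == [{'a', 'b'}]    # the unique solution of this toy relation
```

Here the set of undominated elements happens to be the unique solution; for genuine games the search space is a continuum and no such enumeration is possible.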
30.3.4. One look at the condition (30:B:a) suffices to see that satisfactoriness
for the relation x R y is the same as for the relation y R x, and so
also for their conjunction x R^s y. In other words: R-satisfactoriness is the
same thing as R^s-satisfactoriness.
Thus satisfactoriness is a concept which need be studied only on symmetric
relations.
This is due to the x, y symmetric form of the definitory condition (30:B:a).
The equivalent condition (30:C:a) does not exhibit this symmetry, but of
course this does not invalidate the proof.
Now the definitory condition (30:C:b) for R-saturation is very similar
in structure to (30:C:a). It is equally asymmetric. However, while
(30:C:a) possesses an equivalent symmetric form (30:B:a), this is not the
case for (30:C:b). The corresponding equivalent form for (30:C:b) is, as
we know, the conjunction of (30:B:a) and (30:D), and (30:D) is not at all
symmetric; i.e. (30:D) is essentially altered if x R y is replaced by y R x.
So we see:
(30:E) While R-satisfactoriness is unaffected by the replacement of
R by R^s, it does not appear that this is the case for R-saturation.
Condition (30:B:a) (amounting to R-satisfactoriness) is the same for
R and R^s. Condition (30:D) for R^s is implied by the same for R, since R^s
implies R. So we see:
(30:F) R^s-saturation is implied by R-saturation.
The difference between these two types of saturation referred to above
is a real one: it is easy to give an explicit example of a set which is R^s-saturated
without being R-saturated.¹
Thus the study of saturation cannot be restricted to symmetric relations.
30.3.5. For symmetric relations R the nature of saturation is simple
enough. In order to avoid extraneous complications we assume for this
section that x R x is always true.²
¹ E.g.: The first two examples of 30.3.3. are in the relation of R^s and R to each other
(cf. footnote 2 on p. 267); their concepts of satisfactoriness are identical, but those of
saturation differ.
² This is clearly the case for our decisive example of 30.3.3.: x R y the negation of x ≻ y,
since never x ≻ x.
Now we prove:
(30:G) Let R be symmetric. Then the R-saturation of A is equivalent
to its being maximal R-satisfactory, i.e. it is equivalent
to: A is R-satisfactory, but no proper superset of A is.
Proof: R-saturation means R-satisfactoriness (i.e. condition (30:B:a))
together with condition (30:D). So we need only prove: If A is R-satisfactory,
then (30:D) is equivalent to the non-R-satisfactoriness of all proper
supersets of A.
Sufficiency of (30:D): If B ⊃ A is R-satisfactory, then any y in B, but
not in A, violates (30:D).¹
Necessity of (30:D): Consider a y which violates (30:D). Then
B = A ∪ (y) ⊃ A.
Now B is R-satisfactory, i.e. for x', y' in B, always x' R y'. Indeed, when
x', y' are both in A, this follows from the R-satisfactoriness of A. If x', y'
are both = y, we are merely asserting y R y. If one of x', y' is in A, and the
other = y, then the symmetry of R allows us to assume x' in A, y' = y.
Now our assertion coincides with the negation of (30:D).
If R is not symmetric, we can only assert this:
(30:H) R-saturation of A implies its being maximal R-satisfactory.
Proof: Maximal R-satisfactoriness is the same as maximal R^s-satisfactoriness,
cf. (30:E). As R^s is symmetric, this amounts to R^s-saturation by
(30:G). And this is a consequence of R-saturation by (30:F).
The meaning of the result concerning a symmetric R is the following:
Starting with any R-satisfactory set, this set must be increased as long as
possible, i.e. until any further increase would involve the loss of R-satisfactoriness.
In this way in fine a maximal R-satisfactory set is obtained,
i.e. an R-saturated one by (30:G).² This argument secures not only
the existence of R-saturated sets, but it also permits us to infer that every
R-satisfactory set can be extended to an R-saturated one.
¹ Note that none of the extra restrictions on R has been used so far.
² This process of exhaustion is elementary, i.e. it is over after a finite number of
steps, when D is finite.
However, since the set of all imputations is usually infinite, the case of an infinite D
is important. When D is infinite, it is still heuristically plausible that the process of
exhaustion referred to can be carried out by making an infinite number of steps. This
process, known as transfinite induction, has been the object of extensive set-theoretical
studies. It can be performed in a rigorous way which is dependent upon the so-called
axiom of choice.
The reader who is interested will find literature in F. Hausdorff, footnote 1 on p. 61.
Cf. also E. Zermelo: Beweis, dass jede Menge wohlgeordnet werden kann, Math. Ann.
Vol. 59 (1904) p. 514ff. and Math. Ann. Vol. 65 (1908) p. 107ff.
These matters carry far from our subject and are not strictly necessary for our
purposes. We do not therefore dwell further upon them.
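For a finite D the exhaustion process just described can be sketched in a few lines of Python. This is an editorial illustration, not part of the text; it assumes R symmetric and reflexive, so that by (30:G) the maximal satisfactory set obtained is saturated.

```python
def extend_to_saturated(R, D, A):
    # Greedy exhaustion for a symmetric, reflexive R on a finite D:
    # enlarge the R-satisfactory set A while satisfactoriness is preserved.
    # Since A only grows, a y once rejected stays incompatible, so one
    # pass yields a maximal R-satisfactory, hence R-saturated, set (30:G).
    A = set(A)
    for y in D:
        if y not in A and all(R(x, y) for x in A):
            A.add(y)        # symmetry and R(y, y) keep A satisfactory
    return A

# Third example of 30.3.3.: points of the plane, R = "same ordinate".
D = [(0, 0), (1, 0), (2, 1), (3, 0), (4, 1)]
same_height = lambda p, q: p[1] == q[1]
assert extend_to_saturated(same_height, D, [(0, 0)]) == {(0, 0), (1, 0), (3, 0)}
```

Starting from a single point, the process recovers the whole "line" of that height within the finite domain, exactly as the third example predicts.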
It should be noted that every subset of an R-saturated set is necessarily
R-satisfactory.¹ The above assertion means therefore that the converse
statement is also true.
30.3.6. It would be very convenient if the existence of solutions in our
theory could be established by such methods. The prima facie evidence,
however, is against this: the relation which we must use (x R y the negation of
the domination x ≻ y, cf. 30.3.3.) is clearly asymmetrical. Hence we cannot
apply (30:G), but only (30:H): maximal satisfactoriness is only necessary,
but may not be sufficient, for saturation, i.e. for being a solution.
That this difficulty is really deep seated can be seen as follows: If we could
replace the above R by a symmetric one, this could not only be used to prove
the existence of solutions, but it would also prove in the same operation
the possibility of extending any R-satisfactory set of imputations to a solution
(cf. above). Now it is probable that every game possesses a solution, but
we shall see that there exist games in which certain satisfactory sets are
subsets of no solutions.² Thus the device of replacing R by something
symmetric cannot work, because this would be equally instrumental in proving
the first assertion, which is presumably true, and the second one, which
is certainly not true.³
The reader may feel that this discussion is futile, since the relation x R y
which we must use ("not x ≻ y") is de facto asymmetric. From a technical
point of view, however, it is conceivable that another relation x S y may be
found with the following properties: x S y is not equivalent to x R y; indeed,
S is symmetric, while R is not; but S-saturation is equivalent to R-saturation.
In this case R-saturated sets would have to exist, because they are the
S-saturated ones, and the S-satisfactory (but not necessarily the R-satisfactory)
sets would always be extensible to S-saturated, i.e. R-saturated,
ones.⁴ This program of attack on the existence problem of solutions is not
as arbitrary as it may seem. Indeed, we shall see later a similar problem
which is solved in precisely this way (cf. 51.4.3.). All this is, however, for
the time being just a hope and a possibility.
30.3.7. In the last section we considered the question whether every
R-satisfactory set is a subset of an R-saturated set. We noted that for
the relation x R y which we must use ("not x ≻ y", asymmetric) the answer
is in the negative. A brief comment upon this fact seems to be worth while.
If the answer had been in the affirmative, it would have meant that any
set fulfilling (30:B:a) can be extended to one fulfilling (30:B:a) and (30:D);
or, in the notations of 30.1.1., that any set of imputations fulfilling (30:5:a)
can be extended to one fulfilling (30:5:a) and (30:5:b).
¹ Clearly property (30:B:a) is not lost when passing to a subset.
² Cf. footnote 2 on p. 285.
³ This is a rather useful principle of the technical side of mathematics. The inappropriateness
of a method can be inferred from the fact that if it were workable at all, it
would prove too much.
⁴ The point is that R- and S-saturation are assumed to be equivalent to each other,
but R- and S-satisfactoriness are not expected to be equivalent.
It is instructive to restate this in the terminology of 4.6.2. Then the
statement becomes: Any standard of behavior which is free from inner
contradiction can be extended to one which is stable, i.e. not only free
from inner contradictions, but also able to upset all imputations outside
of it.
The observation in 30.3.6., according to which the above is not true in
general, is of some significance: in order that a set of rules of behavior
should be the nucleus (i.e. a subset) of a stable standard of behavior, it
may have to possess deeper structural properties than mere freedom from
inner contradictions. 1
30.4. Three Immediate Objectives
30.4.1. We have formulated the characteristics of a solution of an unrestricted
zero-sum n-person game and can therefore begin the systematic
investigation of the properties of this concept. In conjunction with the
early stages of this investigation it seems appropriate to carry out three
special enquiries. These deal with the following special cases:
First: Throughout the discussions of 4., the idea recurred that the unsophisticated
concept of a solution would be that of an imputation, i.e., in
our present terminology, of a one-element set V. In 4.4.2. we saw specifically
that this would amount to finding a "first" element with respect to
domination. We saw in the subsequent parts of 4., as well as in our exact
discussions of 30.2., that it is mainly the intransitivity of our concept of
domination which defeats this endeavor and forces us to introduce sets of
imputations V as solutions.
It is, therefore, of interest, now that we are in a position to do it, to
give an exact answer to the following question: For which games do one-element
solutions V exist? What else can be said about the solutions of
such games?
Second: The postulates of 30.1.1. were extracted from our experiences
with the zero-sum three-person game, in its essential case. It is, therefore,
of interest to reconsider this case in the light of the present, exact theory.
Of course, we know (indeed this was a guiding principle throughout our
discussions) that the solutions which we obtained by the preliminary
methods of 22., 23. are solutions in the sense of our present postulates
too. Nevertheless it is desirable to verify this explicitly. The real point,
however, is to ascertain whether the present postulates do not ascribe to
those games further solutions as well. (We have already seen that it is not
inconceivable that there should exist several solutions for the same game.)
We shall therefore determine all solutions for the essential zero-sum
three-person games, with results which are rather surprising, but, as we
shall see, not unreasonable.
¹ If the relation S referred to at the end of 30.3.6. could be found, then this S, and
not R, would disclose which standards of behavior are such nuclei (i.e. subsets): the
S-satisfactory ones.
Cf. the similar situation in 51.4., where the corresponding operation is performed
successfully.
30.4.2. These two items actually exhaust all zero-sum games with n ≤ 3.
We observed in the first remark of 27.5.2. that for n = 1, 2 these games are
inessential; so this, together with the inessential and the essential cases of
n = 3, takes care of everything in n ≤ 3.
When this program is fulfilled we are left with the games with n ≥ 4, and
we know already that difficulties of a new kind begin with them (cf. the
allusions of footnote 1, p. 250, and the end of 27.5.3.).
30.4.3. Third: We introduced in 27.1. the concept of strategic equivalence.
It appeared plausible that this relationship acts as its name
expresses: two games which are linked by it offer the same strategical
possibilities and inducements to form coalitions, etc. Now that we have
put the concept of a solution on an exact basis, this heuristic expectation
demands a rigorous proof.
These three questions will be answered in (31:P) in 31.2.3.; in 32.2.; and
in (31:Q) in 31.3.3., respectively.
31. First Consequences
31.1. Convexity, Flatness, and Some Criteria for Domination
31.1.1. This section is devoted to proving various auxiliary results concerning
solutions, and the other concepts which surround them, like inessentiality,
essentiality, domination, effectivity. Since we have now put
all these notions on an exact basis, the possibility as well as the obligation
arises to be absolutely rigorous in establishing their properties. Some of
the deductions which follow may seem pedantic, and it may appear occasionally
that a verbal explanation could have replaced the mathematical proof.
Such an approach, however, would be possible for only part of the results
of this section and, taking everything into account, the best plan seems to
be to go about the whole matter systematically with full mathematical rigor.
Some principles which play a considerable part in finding solutions are
(31:A), (31:B), (31:C), (31:F), (31:G), (31:H), which for certain coalitions
decide a priori that they must always, or never, be taken into consideration.
It seemed appropriate to accompany these principles with verbal explanations
(in the sense indicated above) in addition to their formal proofs.
The other results possess varying interest of their own in different directions.
Together they give a first orientation of the circumstances which
surround our newly won concepts. The answers to the first and third questions
in 30.4. are given in (31:P) and (31:Q). Another question which
arose previously is settled in (31:M).
31.1.2. Consider two imputations α, β and assume that it has become
necessary to decide whether α ≻ β or not. This amounts to deciding
whether or not there exists a set S with the properties (30:4:a)-(30:4:c) in
30.1.1. One of these, (30:4:c), is
α_i > β_i for all i in S.
FIRST CONSEQUENCES 273
We call this the main condition. The two others, (30:4:a), (30:4:b), are
the preliminary conditions.
Now one of the major technical difficulties in handling this concept of
domination (i.e. in finding solutions V in the sense of 30.1.1.) is the presence
of these preliminary conditions. It is highly desirable to be able, so to say,
to short-circuit them, i.e. to discover criteria under which they are certainly
satisfied, and others under which they are certainly not satisfied. In looking
for criteria of the latter type, it is by no means necessary that they should
involve non-fulfillment of the preliminary conditions for all imputations α;
it suffices if they involve it for all those imputations α which fulfill the main
condition for some other imputation β.¹ (Cf. the proofs of (31:A) or (31:F),
where exactly this is utilized.)
We are interested in criteria of this nature in connection with the question
of determining whether a given set V of imputations is a solution or not,
i.e. whether it fulfills the conditions (30:5:a), (30:5:b) (equivalently, the condition
(30:5:c)) in 30.1.1. This amounts to determining which imputations β
are dominated by elements of V.
Criteria which dispose of the preliminary conditions summarily, in the
situation described above, are most desirable if they contain no reference
at all to α,² i.e. if they refer to S alone. (Cf. (31:F), (31:G), (31:H).) But
even criteria which involve α may be desirable. (Cf. (31:A).) We shall
consider even a criterion which deals with S and α by referring to the
behavior of another α'. (Of course, both in V. Cf. (31:B).)
In order to cover all these possibilities, we introduce the following
terminology:
We consider proofs which aim at the determination of all imputations
β which are dominated by elements of a given set of imputations V. We
are thus concerned with the relations α ≻ β (α in V), and the question
whether a certain set S meets our preliminary requirements for such a relation.
We call S certainly necessary if we know (owing to the fulfillment by S
of some appropriate criterion) that S and α always meet the preliminary
conditions. We call a set S certainly unnecessary if we know (again owing
to the fulfillment by S of some appropriate criterion, but which may now
involve other things too, cf. above) that the possibility that S and α meet
¹ The point being that in our original definition of α ≻ β the preliminary conditions
refer to S and to α (but not to β). Specifically: (30:4:b) does.
² The hypothetical element α of V, which should dominate β.
the preliminary conditions can be disregarded (because this never happens,
or for any other reason; cf. also the qualifications made above).
These considerations may seem complicated, but they express a quite
natural technical standpoint.¹
We shall now give certain criteria of the certainly necessary and of the
certainly unnecessary characters. After each criterion we shall give a
verbal explanation of its content, which, it is hoped, will make our technique
clearer to the reader.
31.1.3. First, three elementary criteria:
(31:A) S is certainly unnecessary for a given α (in V) if there exists
an i in S with α_i = v((i)).
Explanation: A coalition need never be considered if it does not promise
to every participant (individually) definitely more than he can get for
himself.
Proof: If α fulfills the main condition for some imputation β, then α_i > β_i.
Since β is an imputation, so β_i ≥ v((i)). Hence α_i > v((i)). This contradicts
α_i = v((i)).
(31:B) S is certainly unnecessary for a given α (in V) if it is certainly
necessary (and being considered) for another α' (in V), such that
(31:1) α'_i ≥ α_i for all i in S.
Explanation: A coalition need not be considered if another one, which
has the same participants and promises every one (individually) at least
as much, is certain to receive consideration.
Proof: Let α and β fulfill the main condition: α_i > β_i for all i in S. Then
α' and β fulfill it also, by (31:1): α'_i > β_i for all i in S. Since S and α' are
¹ For the reader who is familiar with formal logic we observe the following:
The attributes "certainly necessary" and "certainly unnecessary" are of a logical
nature. They are characterized by our ability to show (by any means whatever) that a
certain logical omission will invalidate no proof (of a certain kind). Specifically: Let a
proof be concerned with the domination of a β by an element α of V. Assume that this
domination α ≻ β, occurring with the help of the set S (α in V), be under consideration.
Then this proof remains correct if we treat S and α (when they possess the attribute in
question) as if they always (or never) fulfilled the preliminary conditions, without our
actually investigating these conditions. In the mathematical proofs which we shall
carry out, this procedure will be applied frequently.
It can even happen that the same S will turn out (by the use of two different criteria)
to be both certainly necessary and certainly unnecessary (for the same α, e.g. for all of
them). This means merely that neither of the two omissions mentioned above spoils
any proof. This can happen, for instance, when α fulfills the main condition for no
imputation. (An example is obtained by combining (31:F) and (31:G) in the case described
in (31:E:b). Another is pointed out in footnote 1 on p. 310, and in footnote 1 on p. 431.)
being considered, they thus establish that β is dominated by an element of
V, and it is unnecessary to consider S and α.
(31:C) S is certainly unnecessary if another set T ⊆ S is certainly
necessary (and is being considered).
Explanation: A coalition need not be considered if a part of it is already
certain to receive consideration.
Proof: Let α (in V) and β fulfill the main condition for S; then they will
obviously fulfill it a fortiori for T ⊆ S. Since T and α are being considered,
they thus establish that β is dominated by an element of V, and it is unnecessary
to consider S and α.
31.1.4. We now introduce some further criteria, and on a somewhat
broader basis than immediately needed. For this purpose we begin with
the following consideration:
For an arbitrary set S = (k₁, ..., k_p) apply (25:5) in 25.4.1., with
S₁ = (k₁), ..., S_p = (k_p). Then
v(S) ≥ v((k₁)) + ... + v((k_p))
obtains, i.e.
(31:2) v(S) ≥ Σ_{k in S} v((k)).
The excess of the left-hand side of (31:2) over the right-hand side expresses
the total advantage (for all participants together) inherent in the formation
of the coalition S. We call this the convexity of S. If this advantage
vanishes, i.e. if
(31:3) v(S) = Σ_{k in S} v((k)),
then we call S flat.
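The quantities (31:2), (31:3) are directly computable from a characteristic function. The following Python sketch is an editorial illustration, not part of the text; the sample values are those of the essential zero-sum three-person game in its reduced form with γ = 1 (v((i)) = −1, v of each two-person coalition = 1).

```python
def convexity(v, S):
    # excess of v(S) over the sum of the v((k)), k in S; cf. (31:2)
    return v[frozenset(S)] - sum(v[frozenset([k])] for k in S)

def is_flat(v, S):
    # (31:3): the coalition S yields no total advantage to its members
    return convexity(v, S) == 0

# Characteristic function of the essential zero-sum three-person game,
# reduced form with gamma = 1:
v = {frozenset(S): w for S, w in [
    ((), 0), ((1,), -1), ((2,), -1), ((3,), -1),
    ((1, 2), 1), ((1, 3), 1), ((2, 3), 1), ((1, 2, 3), 0)]}

assert convexity(v, (1, 2)) == 3       # 1 - ((-1) + (-1))
assert not is_flat(v, (1, 2))
assert is_flat(v, (1,))                # every one-element set is flat, (31:D:b)
assert convexity(v, (1, 2, 3)) == 3    # I not flat: the game is essential, (31:E:a)
```

The assertions confirm (31:D:b) and (31:E:a) for this particular game.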
The following observations are immediate:
(31:D) The following sets are always flat:
(31:D:a) The empty set,
(31:D:b) Every one-element set,
(31:D:c) Every subset of a flat set.
(31:E) Any one of the following assertions is equivalent to the inessentiality
of the game:
(31:E:a) I = (1, ..., n) is flat,
(31:E:b) There exists an S such that both S and −S are flat,
(31:E:c) Every S is flat.
Proof: Ad (31:D:a), (31:D:b): For these sets (31:3) is obvious.
Ad (31:D:c): Assume S ⊆ T, T flat. Put R = T − S. Then by (31:2)
(31:4) v(S) ≥ Σ_{k in S} v((k)),
(31:5) v(R) ≥ Σ_{k in R} v((k)).
Since T is flat, so by (31:3)
(31:6) v(T) = Σ_{k in T} v((k)).
As S ∩ R = ∅, S ∪ R = T; therefore
v(S) + v(R) ≤ v(T),
Σ_{k in S} v((k)) + Σ_{k in R} v((k)) = Σ_{k in T} v((k)).
Hence (31:6) implies
(31:7) v(S) + v(R) ≤ Σ_{k in S} v((k)) + Σ_{k in R} v((k)).
Now comparison of (31:4), (31:5) and (31:7) shows that we must have
equality in all of them. But equality in (31:4) expresses just the flatness
of S.
Ad (31:E:a): The assertion coincides with (27:B) in 27.4.1.
Ad (31:E:c): The assertion coincides with (27:C) in 27.4.2.
Ad (31:E:b): For an inessential game this is true for any S owing to
(31:E:c). Conversely, if this is true for (at least one) S, then
v(S) = Σ_{k in S} v((k)),  v(−S) = Σ_{k not in S} v((k)),
hence by addition (use (25:3:b) in 25.3.1.)
0 = Σ_{k=1}^n v((k)),
i.e. the game is inessential by (31:E:a) or by (27:B) in 27.4.1.
31.1.5. We are now in a position to prove:
(31:F) S is certainly unnecessary if it is flat.
Explanation: A coalition need not be considered if the game allows no
total advantage (for all its participants together) over what they would get
for themselves as independent players.¹
¹ Observe that this is related to (31:A), but not at all identical with it! Indeed: (31:A)
deals with the α_i, i.e. with the promises made to each participant individually. (31:F)
deals with v(S) (which determines flatness), i.e. with the possibilities of the game for all
participants together. But both criteria correlate these with the v((i)), i.e. with what
each player individually can get for himself.
Proof: If α ≻ β with the help of this S, then we have: Necessarily
S ≠ ∅; α_i > β_i for all i in S; and β_i ≥ v((i)), hence α_i > v((i)). So
Σ_{i in S} α_i > Σ_{i in S} v((i)). As S is flat, this means Σ_{i in S} α_i > v(S). But S must
be effective: Σ_{i in S} α_i ≤ v(S), which is a contradiction.
(31:G) S is certainly necessary if −S is flat and S ≠ ∅.
Explanation: A coalition must be considered if it (is not empty and)
opposes one of the kind described in (31:F).
Proof: The preliminary conditions are fulfilled for all imputations α.
Ad (30:4:a): S ≠ ∅ was postulated.
Ad (30:4:b): Always α_i ≥ v((i)), so Σ_{i not in S} α_i ≥ Σ_{i not in S} v((i)). Since
Σ_{i=1}^n α_i = 0, the left-hand side is equal to −Σ_{i in S} α_i. Since −S is flat, the
right-hand side is equal to v(−S), i.e. (use (25:3:b) in 25.3.1.) to −v(S).
So −Σ_{i in S} α_i ≥ −v(S), Σ_{i in S} α_i ≤ v(S), i.e. S is effective.
From (31:F), (31:G) we obtain by specialization:
(31:H) A p-element set is certainly necessary if p = n − 1, and
certainly unnecessary if p = 0, 1, n.
Explanation: A coalition must be considered if it has only one opponent.
A coalition need not be considered if it is empty, or consists of one player
only (!), or if it has no opponents.
Proof: p = n − 1: −S has only one element, hence it is flat by (31:D)
above. The assertion now follows from (31:G).
p = 0, 1: Immediate by (31:D) and (31:F).
p = n: In this case necessarily S = I = (1, ..., n), rendering the
main condition unfulfillable. Indeed, that now requires α_i > β_i for all
i = 1, ..., n, hence Σ_{i=1}^n α_i > Σ_{i=1}^n β_i. But as α, β are imputations, both
sides vanish, and this is a contradiction.
Thus those p for which the necessity of S is in doubt are restricted to
p ≠ 0, 1, n − 1, n, i.e. to the interval
(31:8) 2 ≤ p ≤ n − 2.
This interval plays a role only when n ≥ 4. The situation discussed is
similar to that at the end of 27.5.2. and in 27.5.3., and the case n = 3
appears once more as one of particular simplicity.
31.2. The System of All Imputations. One-Element Solutions
31.2.1. We now discuss the structure of the system of all imputations.
(31:I) For an inessential game there exists precisely one imputation:
(31:9) α = {α₁, ..., α_n}, α_i = v((i)) for i = 1, ..., n.
For an essential game there exist infinitely many imputations
(an (n − 1)-dimensional continuum), but (31:9) is not one of
them.
Proof: Consider an imputation
β = {β₁, ..., β_n},
and put
β_i = v((i)) + ε_i for i = 1, ..., n.
Then the characteristic conditions (30:1), (30:2) of 30.1.1. become
(31:10) ε_i ≥ 0 for i = 1, ..., n,
(31:11) Σ_{i=1}^n ε_i = −Σ_{i=1}^n v((i)).
If Γ is inessential, then (27:B) in 27.4.1. gives Σ_{i=1}^n v((i)) = 0; so
(31:10), (31:11) amount to ε₁ = ... = ε_n = 0, i.e. (31:9) is the unique
imputation.
If Γ is essential, then (27:B) in 27.4.1. gives −Σ_{i=1}^n v((i)) > 0, so (31:10),
(31:11) possess infinitely many solutions, which form an (n − 1)-dimensional
continuum;¹ so the same is true for the imputations β. But the α of (31:9)
is not one of them, because ε₁ = ... = ε_n = 0 would now violate (31:11).
An immediate consequence:
(31:J) A solution V is never empty.
Proof: I.e. the empty set ∅ is not a solution. Indeed: Consider any
imputation β; there exists at least one by (31:I). β is not in ∅, and no
α in ∅ has α ≻ β. So ∅ violates (30:5:b) in 30.1.1.²
31.2.2. We have pointed out before that the simultaneous validity of
(31:12) α ≻ β, β ≻ α
is by no means impossible.³ However:
(31:K) Never α ≻ α.
¹ There is only one equation: (31:11).
² This argument may seem pedantic; but if the conditions for the imputations conflicted
(i.e. without (31:I)), then V = ∅ would be a solution.
³ The sets S of these two dominations would have to be disjunct. By (31:H) these
S must have ≥ 2 elements each. Hence (31:12) can occur only when n ≥ 4.
By a more detailed consideration even n = 4 can be excluded; but for every n ≥ 5
(31:12) is really possible.
Proof: (30:4:a), (30:4:c) in 30.1.1. conflict for to = ~p.
(31:L) Given an essential game and an imputation α, there exists an imputation β such that β ≻ α but not α ≻ β.¹

Proof: Put

α = {α₁, …, αₙ}.

Consider the equation

(31:13) αᵢ = v((i)).

Since the game is essential, (31:I) excludes the possibility that (31:13) be valid for all i = 1, …, n. Let (31:13) fail, say for i = i₀. Since α is an imputation, αᵢ₀ ≥ v((i₀)); hence the failure of (31:13) means αᵢ₀ > v((i₀)), i.e.

(31:14) αᵢ₀ = v((i₀)) + ε₀, ε₀ > 0.

Now define a vector

β = {β₁, …, βₙ}

by

βᵢ₀ = αᵢ₀ - ε₀ = v((i₀)),
βᵢ = αᵢ + ε₀/(n - 1) for i ≠ i₀.

These equations make it clear that βᵢ ≥ v((i))² and that ∑ᵢ₌₁ⁿ βᵢ = ∑ᵢ₌₁ⁿ αᵢ = 0.³

So β is an imputation along with α.
We now prove the two assertions concerning α, β.

β ≻ α: We have βᵢ > αᵢ for all i ≠ i₀, i.e. for all i in the set S = (1, …, n) - (i₀). This set has n - 1 elements and it fulfills the main condition (for β, α); hence (31:H) gives β ≻ α.

Not α ≻ β: Assume that α ≻ β. Then a set S fulfilling the main condition must exist, which is not excluded by (31:H). So S must have ≥ 2 elements. So an i ≠ i₀ in S must exist. The former implies βᵢ > αᵢ

¹ Hence α ≠ β.
² For i = i₀ we have actually βᵢ₀ = v((i₀)). For i ≠ i₀ we have βᵢ > αᵢ ≥ v((i)).
³ ∑ᵢ₌₁ⁿ βᵢ = ∑ᵢ₌₁ⁿ αᵢ because the difference of βᵢ and αᵢ is -ε₀ for one value of i (i = i₀) and ε₀/(n - 1) for n - 1 values of i (all i ≠ i₀).
280 GENERAL THEORY: ZEROSUM nPERSONS
(by the construction of β); the latter implies αᵢ > βᵢ (owing to the main condition), and this is a contradiction.
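The construction of β in this proof can be written out as a short routine. A sketch under the same assumptions (the function name is ours; the sample data are the reduced three-person game with v((i)) = -1):

```python
def undominated_better(alpha, v_single):
    """Construct beta as in (31:L): beta dominates alpha but not conversely.
    Assumes the game is essential and alpha is an imputation."""
    n = len(alpha)
    # Find i0 with alpha_i0 > v((i0)); one exists since the game is essential.
    i0 = next(i for i in range(n) if alpha[i] > v_single[i])
    eps0 = alpha[i0] - v_single[i0]              # (31:14)
    beta = [a + eps0 / (n - 1) for a in alpha]   # beta_i = alpha_i + eps0/(n-1)
    beta[i0] = v_single[i0]                      # beta_i0 = alpha_i0 - eps0
    return beta

v_single = [-1, -1, -1]
alpha = [0, 0, 0]
beta = undominated_better(alpha, v_single)       # i0 = 0, eps0 = 1
assert beta == [-1, 0.5, 0.5]
# beta exceeds alpha on the (n-1)-element set S = {1, 2}:
assert all(beta[i] > alpha[i] for i in (1, 2))
```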
31.2.3. We can draw the conclusions in which we were primarily interested:

(31:M) An imputation α, for which never α′ ≻ α, exists if and only if the game is inessential.¹

Proof: Sufficiency: If the game is inessential, then it possesses by (31:I) precisely one imputation α, and this has the desired property by (31:K).

Necessity: If the game is essential, and α is an imputation, then the α′ = β of (31:L) gives α′ ≻ α.
(31:N) A game which possesses a one-element solution² is necessarily inessential.

Proof: Denote the one-element solution in question by V = (α). This V must satisfy (30:5:b) in 30.1.1. This means under our present circumstances: Every β other than α is dominated by α. I.e.:

β ≠ α implies α ≻ β.

Now if the game is essential, then (31:L) provides a β which violates this condition.
(31:O) An inessential game possesses precisely one solution V. This is the one-element set V = (α) with the α of (31:I).

Proof: By (31:I) there exists precisely one imputation, the α of (31:I). A solution V cannot be empty by (31:J); hence the only possibility is V = (α). Now V = (α) is indeed a solution, i.e. it fulfills (30:5:a), (30:5:b) in 30.1.1.: the former by (31:K), the latter because α is the only imputation by (31:I).
We can now answer completely the first question of 30.4.1.:

(31:P) A game possesses a one-element solution (cf. footnote 2 above) if and only if it is inessential; and then it possesses no other solutions.

Proof: This is just a combination of the results of (31:N) and (31:O).
¹ Cf. the considerations of (30:A:a) in 30.2.2., and particularly footnote 2 on p. 265.
² We do not exclude the possibility that this game may possess other solutions as well, which may or may not be one-element sets. Actually this never happens (under our present hypothesis), as the combination of the result of (31:N) with that of (31:O) or the result of (31:P) shows. But the present consideration is independent of all this.
31.3. The Isomorphism Which Corresponds to Strategic Equivalence

31.3.1. Consider two games Γ and Γ′ with the characteristic functions v(S) and v′(S) which are strategically equivalent in the sense of 27.1. We propose to prove that they are really equivalent from the point of view of the concepts defined in 30.1.1. This will be done by establishing an isomorphic correspondence between the entities which form the substratum of the definitions of 30.1.1., i.e. the imputations. That is, we wish to establish a one-to-one correspondence, between the imputations of Γ and those of Γ′, which is isomorphic with respect to those concepts, i.e. which carries effective sets, domination, and solutions for Γ into those for Γ′.

The considerations are merely an exact elaboration of the heuristic indications of 27.1.1., hence the reader may find them unnecessary. However, they give quite an instructive instance of an "isomorphism proof," and, besides, our previous remarks on the relationship of verbal and of exact proofs may be applied again.
31.3.2. Let the strategic equivalence be given by α₁⁰, …, αₙ⁰ in the sense of (27:1), (27:2) in 27.1.1. Consider all imputations α = {α₁, …, αₙ} of Γ and all imputations α′ = {α′₁, …, α′ₙ} of Γ′. We look for a one-to-one correspondence

(31:15) α ↔ α′

with the specified properties.

What (31:15) ought to be is easily guessed from the motivation at the beginning of 27.1.1. We described there the passage from Γ to Γ′ by adding to the game a fixed payment of αₖ⁰ to the player k. Applying this principle to the imputations means

(31:16) α′ₖ = αₖ + αₖ⁰ for k = 1, …, n.¹

Accordingly we define the correspondence (31:15) by the equations (31:16).
31.3.3. We now verify the asserted properties of (31:15), (31:16).

The imputations of Γ are mapped on the imputations of Γ′: This means by (30:1), (30:2) in 30.1.1., that

(31:17) αᵢ ≥ v((i)) for i = 1, …, n,

(31:18) ∑ᵢ₌₁ⁿ αᵢ = 0,

go over into

(31:17*) α′ᵢ ≥ v′((i)) for i = 1, …, n,

(31:18*) ∑ᵢ₌₁ⁿ α′ᵢ = 0.

¹ If we introduce the (fixed) vector α⁰ = {α₁⁰, …, αₙ⁰}, then (31:16) may be written vectorially: α′ = α + α⁰. I.e. it is a translation (by α⁰) in the vector space of the imputations.
This is so for (31:17), (31:17*) because v′((i)) = v((i)) + αᵢ⁰ (by (27:2) in 27.1.1.), and for (31:18), (31:18*) because ∑ᵢ₌₁ⁿ αᵢ⁰ = 0 (by (27:1) id.).

Effectivity for Γ goes over into effectivity for Γ′: This means by (30:3) in 30.1.1., that

∑_{i∈S} αᵢ ≤ v(S)

goes over into

∑_{i∈S} α′ᵢ ≤ v′(S).

This becomes evident by comparison of (31:16) with (27:2).

Domination for Γ goes over into domination for Γ′: This means the same thing for (30:4:a)-(30:4:c) in 30.1.1. (30:4:a) is trivial; (30:4:b) is effectivity, which we settled; (30:4:c) asserts that αᵢ > βᵢ goes over into α′ᵢ > β′ᵢ, which is obvious. The solutions of Γ are mapped on the solutions of Γ′: This means the same for (30:5:a), (30:5:b) (or (30:5:c)) in 30.1.1. These conditions involve only domination, which we settled.
We restate these results:

(31:Q) If two zero-sum games Γ and Γ′ are strategically equivalent, then there exists an isomorphism between their imputations, i.e. a one-to-one mapping of those of Γ on those of Γ′ which leaves the concepts defined in 30.1.1. invariant.
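Since the isomorphism (31:16) is a plain translation, its invariance properties can be spot-checked numerically. A minimal sketch (names are ours; the domination test hardcodes the two-component rule of the three-person game, where effectivity is automatic, so it is an illustration rather than the general definition):

```python
# The isomorphism (31:16) is just a translation by the fixed vector alpha0.
def translate(alpha, alpha0):
    return [a + a0 for a, a0 in zip(alpha, alpha0)]

alpha0 = [1, -2, 1]                 # any alpha0 with sum 0, per (27:1)
assert sum(alpha0) == 0

def dominates_2of3(a, b):
    """Domination in the reduced three-person game (cf. (32:4)):
    a > b in at least two of the three components."""
    return sum(x > y for x, y in zip(a, b)) >= 2

a = [0.5, 0.5, -1]
b = [0, 0, 0]
a2, b2 = translate(a, alpha0), translate(b, alpha0)
# Component-wise differences are preserved, so domination is invariant:
assert dominates_2of3(a, b) == dominates_2of3(a2, b2)
```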
32. Determination of All Solutions
of the Essential Zero-sum Three-person Game

32.1. Formulation of the Mathematical Problem. The Graphical Method
32.1.1. We now turn to the second problem formulated in 30.4.1.: the determination of all solutions for the essential zero-sum three-person game.

We know that we may consider this game in the reduced form and that we can choose γ = 1.¹ The characteristic function in this case is completely determined, as we have discussed before:²

(32:1) v(S) = 0, -1, 1, 0 when S has 0, 1, 2, 3 elements, respectively.

An imputation is a vector

α = {α₁, α₂, α₃}
¹ Cf. the discussion at the beginning of 29.1., or the references there given: the end of 27.1. and the second remark in 27.3.
² Cf. the discussion at the beginning of 29.1., or the second case of 27.5.
ALL SOLUTIONS OF THE THREEPERSON GAME 283
whose three components must fulfill (30:1), (30:2) in 30.1.1. These conditions now become (considering (32:1)):

(32:2) α₁ ≥ -1, α₂ ≥ -1, α₃ ≥ -1,

(32:3) α₁ + α₂ + α₃ = 0.

We know, from (31:I) in 31.2.1., that these α₁, α₂, α₃ form only a two-dimensional continuum, i.e. that they should be representable in the plane. Indeed, (32:3) makes a very simple plane representation possible.
Figure 52.

Figure 53.
32.1.2. For this purpose we take three axes in the plane, making angles of 60° with each other. For any point of the plane we define α₁, α₂, α₃ by directed perpendicular distances from these three axes. The whole arrangement, and in particular the signs to be ascribed to the α₁, α₂, α₃,
are given in Figure 52. It is easy to verify that for any point the algebraic sum of these three perpendicular distances vanishes and that conversely any triplet α = {α₁, α₂, α₃} for which the sum vanishes corresponds to a point.

So the plane representation of Figure 52 expresses precisely the condition (32:3). The remaining condition (32:2) is therefore the equivalent of a restriction imposed upon the point α within the plane of Figure 52. This restriction is obviously that it must lie on or within the triangle formed by the three lines α₁ = -1, α₂ = -1, α₃ = -1. Figure 53 illustrates this. Thus the shaded area, to be called the fundamental triangle, represents the α which fulfill (32:2), (32:3), i.e. all imputations.
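The triangular coordinate system of 32.1.2. can be realized concretely: any (α₁, α₂, α₃) with zero sum maps to a unique point of the plane and back. A sketch (the particular Cartesian embedding is our own choice, not the book's exact drawing):

```python
import math

# Map (a1, a2, a3) with a1 + a2 + a3 = 0 to Cartesian (x, y) and back,
# so that each a_i is an affine function of the planar point.
def to_plane(a):
    a1, a2, a3 = a
    x = (a2 - a3) / math.sqrt(3)
    y = a1
    return (x, y)

def from_plane(p):
    x, y = p
    a1 = y
    a2 = (-y + math.sqrt(3) * x) / 2
    a3 = (-y - math.sqrt(3) * x) / 2
    return (a1, a2, a3)   # always sums to zero, as (32:3) requires

a = (0.5, -1.25, 0.75)
assert abs(sum(a)) < 1e-12
back = from_plane(to_plane(a))
assert all(abs(u - w) < 1e-12 for u, w in zip(a, back))
```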
32.1.3. We next express the relationship of domination in this graphical representation. As n = 3, we know from (31:H) (cf. also the discussion of (31:8) at the end of 31.1.5.) that among the subsets S of I = (1, 2, 3) those of two elements are certainly necessary, and all others certainly unnecessary. I.e., the sets which we must consider in our determination of all solutions V are precisely these:

(1,2); (1,3); (2,3).

Thus for

α = {α₁, α₂, α₃}, β = {β₁, β₂, β₃}

domination α ≻ β means that

(32:4) Either α₁ > β₁, α₂ > β₂; or α₁ > β₁, α₃ > β₃; or α₂ > β₂, α₃ > β₃.

Diagrammatically: α dominates the points in the shaded areas, and no others,¹ in Figure 54.

Figure 54.

Figure 55.

¹ In particular, no points on the boundary lines of these areas.
Thus the point α dominates three of the six sextants indicated in Figure 55 (namely A, C, E). From this one concludes easily that α is dominated by the three other sextants (namely B, D, F). So the only points which do not dominate α and are not dominated by it lie on the three lines (i.e. six half-lines) which separate these sextants. I.e.:

(32:5) If neither of α, β dominates the other, then the direction from α to β is parallel to one of the sides of the fundamental triangle.
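Both (32:4) and (32:5) are easy to confirm by sampling. A sketch (names ours; the check runs over integer-valued zero-sum triplets only):

```python
import random

def dominates(a, b):
    """(32:4): alpha dominates beta iff alpha > beta in two of the
    three components (the two-element sets are the necessary ones)."""
    return sum(x > y for x, y in zip(a, b)) >= 2

# (32:5): if neither point dominates the other, their difference has a
# vanishing component, i.e. the direction between them is parallel to a
# side of the fundamental triangle.
random.seed(0)
for _ in range(1000):
    a = [random.choice([-1, 0, 1, 2]) for _ in range(2)]
    a.append(-sum(a))                      # enforce (32:3)
    b = [random.choice([-1, 0, 1, 2]) for _ in range(2)]
    b.append(-sum(b))
    if not dominates(a, b) and not dominates(b, a):
        assert any(x == y for x, y in zip(a, b))
```

The reason is the one given in the text: a difference vector with three nonzero components summing to zero must have two components of equal sign, which forces a domination.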
32.1.4. Now the systematic search for all solutions can begin.

Consider a solution V, i.e. a set in the fundamental triangle which fulfills the conditions (30:5:a), (30:5:b) of 30.1.1. In what follows we shall use these conditions currently, without referring to them explicitly on each occasion.

Since the game is essential, V must contain at least two points,¹ say α and β. By (32:5) the direction from α to β is parallel to one of the sides of the fundamental triangle; and by a permutation of the numbers of the players 1,2,3 we can arrange this to be the side α₁ = -1, i.e. the horizontal. So α, β lie on a horizontal line l. Now two possibilities arise and we treat them separately:

(a) Every point of V lies on l.

(b) Some points of V do not lie on l.
32.2. Determination of All Solutions

32.2.1. We consider (b) first. Any point not on l must fulfill (32:5) with respect to both α and β, i.e. it must be the third vertex of one of the two equilateral triangles with the base α, β: one of the two points α′, α″ of Figure 56. So either α′ or α″ belongs to V. Any point of V which differs from α, β and α′ or α″ must again fulfill (32:5), but now with respect to all three points α, β and α′ or α″. This, however, is impossible, as an inspection of Figure 56 immediately reveals. So V consists of precisely these three points, i.e. of the three vertices of a triangle which is in the position of triangle I or triangle II of Figure 57. Comparison of Figure 57 with Figures 54 or 55 shows that the vertices of triangle I leave the interior of this triangle undominated. This rules out I.²
¹ This is also directly observable in Fig. 54.
² This provides the example referred to in 30.3.6.: The three vertices of triangle I do not dominate each other, i.e. they form a satisfactory set in the sense of loc. cit. They are nevertheless unsuitable as a subset of a solution.
Figure 57.

Figure 58.

Figure 59.
The same comparison shows that the vertices of triangle II leave undominated the dotted areas indicated in Figure 58. Hence triangle II must be placed in such a manner in the fundamental triangle that these dotted areas fall entirely outside the fundamental triangle. This means that the three vertices of II must lie on the three sides of the fundamental triangle, as indicated in Figure 59. Thus these three vertices are the middle points of the three sides of the fundamental triangle.

Comparison of Figure 59 with Figure 54 or Figure 55 shows that this set V is indeed a solution. One verifies immediately that these three middle points are the points (vectors)

(32:6) {-1, 1/2, 1/2}, {1/2, -1, 1/2}, {1/2, 1/2, -1},

i.e. that this solution V is the set of Figure 51.
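That (32:6) fulfills (30:5:a) and (30:5:b) can be spot-checked with exact rational arithmetic. A sketch (the grid sampling is our own device and of course does not replace the geometric argument):

```python
from itertools import product
from fractions import Fraction as F

def dominates(a, b):
    """(32:4): domination in the reduced three-person game."""
    return sum(x > y for x, y in zip(a, b)) >= 2

half = F(1, 2)
V = [(-1, half, half), (half, -1, half), (half, half, -1)]   # (32:6)

# (30:5:a): no element of V dominates another.
assert not any(dominates(a, b) for a in V for b in V if a != b)

# (30:5:b): every other imputation (sampled on a grid of step 1/4)
# is dominated by some element of V.
grid = [F(k, 4) for k in range(-4, 9)]           # from -1 to 2
for a1, a2 in product(grid, repeat=2):
    a = (a1, a2, -a1 - a2)                       # enforce (32:3)
    if all(x >= -1 for x in a) and a not in V:   # enforce (32:2)
        assert any(dominates(v, a) for v in V)
```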
Figure 60.
32.2.2. Let us now consider (a) in 32.1.4. In this case all of V lies on the horizontal line l. By (32:5) no two points of l dominate each other, so that every point of l is undominated by V. Hence every point of l (in the fundamental triangle) must belong to V. I.e., V is precisely that part of l which is in the fundamental triangle. So the elements α = {α₁, α₂, α₃} of V are characterized by an equation

(32:7) α₁ = c.

Diagrammatically: Figure 60.

Comparison of Figure 60 with Figures 54 or 55 shows that the line l leaves the dotted area indicated on Figure 60 undominated. Hence the line l must be placed in such a manner in the fundamental triangle that the dotted area falls entirely outside the fundamental triangle. This means that l must lie below the middle points of those two sides of the fundamental
triangle which it intersects.¹ In the terminology of (32:7): c < 1/2. On the other hand, c ≥ -1 is necessary to make l intersect the fundamental triangle at all. So we have:

(32:8) -1 ≤ c < 1/2.

Comparison of Figure 60 with Figures 54 or 55 shows that under these conditions² the set V, i.e. l, is indeed a solution.
But the form (32:7) of this solution was brought about by a suitable permutation of the numbers 1,2,3. Hence we have two further solutions, characterized by

(32:7*) α₂ = c,

and characterized by

(32:7**) α₃ = c,

always with (32:8).

32.2.3. Summing up:

This is a complete list of solutions:

(32:A) For every c which fulfills (32:8): the three sets (32:7), (32:7*), (32:7**).

(32:B) The set (32:6).
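The discriminatory solutions (32:A) can be spot-checked in the same numerical style. A sketch (c = -1/4 is an arbitrary admissible choice; the two grids are our own sampling devices):

```python
from fractions import Fraction as F
from itertools import product

def dominates(a, b):
    """(32:4): domination in the reduced three-person game."""
    return sum(x > y for x, y in zip(a, b)) >= 2

c = F(-1, 4)                     # any c with -1 <= c < 1/2, per (32:8)

# The solution (32:7): the part of the line alpha_1 = c inside the
# fundamental triangle, sampled with step 1/8.
ys = [F(k, 8) for k in range(-8, 17)]
V = [(c, y, -c - y) for y in ys if -c - y >= -1]

# (30:5:a): no point of the line dominates another (the points differ
# only in the second and third components, with opposite signs).
assert not any(dominates(a, b) for a in V for b in V if a != b)

# (30:5:b): every sampled imputation off the line is dominated by V.
grid = [F(k, 4) for k in range(-4, 9)]
for a1, a2 in product(grid, repeat=2):
    a = (a1, a2, -a1 - a2)
    if all(x >= -1 for x in a) and a1 != c:
        assert any(dominates(v, a) for v in V)
```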
33. Conclusions

33.1. The Multiplicity of Solutions. Discrimination and Its Meaning

33.1.1. The result of 32. calls for careful consideration and comment. We have determined all solutions of the essential zero-sum three-person game. In 29.1., before the rigorous definitions of 30.1. were formulated, we had already decided which solution we wanted; and this solution reappears now as (32:B). But we found other solutions besides: the (32:A), which are infinitely many sets, each one of them an infinite set of imputations itself. What do these supernumerary solutions stand for?

Consider, e.g., the form (32:7) of (32:A). This gives a solution for every c of (32:8), consisting of all imputations α = {α₁, α₂, α₃} which fulfill (32:7), i.e. α₁ = c. Besides this, they must fulfill only the requirements
¹ The limiting position of l, going through the middle points themselves, must be excluded. The reason is that in this position the vertex of the dotted area would lie on the fundamental triangle, and this is inadmissible since that point too is undominated by V, i.e. by l.

Observe that the corresponding prohibition did not occur in case (b), i.e. for the dotted areas of Figure 58. Their vertices too were undominated by V, but they belong to V. In our present position of V, on the other hand, the vertex under consideration does not belong to V, i.e. to l.

This exclusion of the limiting position causes the < (and not the ≤) in the inequality which follows.

² (32:8), i.e. l intersects the fundamental triangle, but below its middle.
CONCLUSIONS 289
(30:1), (30:2) of 30.1.1., i.e. (32:2), (32:3) of 32.1.1. In other words: Our solution consists of all

(33:1) α = {c, a, -c - a}, -1 ≤ a ≤ 1 - c.
The interpretation of this solution consists manifestly of this: One of the players (in this case 1) is being discriminated against by the two others (in this case 2,3). They assign to him the amount which he gets, c. This amount is the same for all imputations of the solution, i.e. of the accepted standard of behavior. The place in society of player 1 is prescribed by the two other players; he is excluded from all negotiations that may lead to coalitions. Such negotiations do go on, however, between the two other players: the distribution of their share, -c, depends entirely upon their bargaining abilities. The solution, i.e. the accepted standard of behavior, imposes absolutely no restrictions upon the way in which this share is divided between them, expressed by a, -c - a.¹ This is not surprising. Since the excluded player is absolutely "tabu," the threat of the partner's desertion is removed from each participant of the coalition. There is no way of determining any definite division of the spoils.²,³
Incidentally: It is quite instructive to see how our concept of a solution
as a set of imputations is able to take care of this situation also.
33.1.2. There is more that should be said about this "discrimination" against a player.

First, it is not done in an entirely arbitrary manner. The c, in which the discrimination finds its quantitative expression, is restricted by (32:8) in 32.2.2. Now part of (32:8), c ≥ -1, is clear enough in its meaning, but the significance of the other part, c < 1/2, is considerably more recondite (cf., however, below). It all comes back to this: Even an arbitrary system of discriminations can be compatible with a stable standard of behavior, i.e. order of society, but it may have to fulfill certain quantitative conditions, in order that it may not impair that stability.
Second, the discrimination need not be clearly disadvantageous to the player who is affected. It cannot be clearly advantageous, i.e. his fixed value c cannot be equal to or better than the best the others may expect.⁴ This would mean, by (33:1), that c ≥ 1 - c, i.e. c ≥ 1/2, which is exactly what (32:8) forbids. But it would be clearly disadvantageous only for c = -1; and this is a possible value for c (by (32:8)), but not the only one. c = -1 means that the player is not only excluded, but also exploited to
¹ Except that both must be ≥ -1, i.e. ≥ what the player can get for himself, without any outside help. a ≥ -1, -c - a ≥ -1 is, of course, the -1 ≤ a ≤ 1 - c of (33:1).
² Cf. the discussions at the end of 25.2. Observe that the arguments which we adduced there to motivate the primate of v(S) have ceased to operate in this particular case, and v(S) nevertheless determines the solutions!
³ Observe that due to (32:8) in 32.2.2., the "spoils," i.e. the amount -c, can be both positive and negative.
⁴ And that is excluded by c < 1/2, but not by c ≥ -1.
100 per cent. The remaining c (of (32:8)) with -1 < c < 1/2 correspond to gradually less and less disadvantageous forms of segregation.
33.1.3. It seems remarkable that our concept of a solution is able to express all these nuances of non-discriminatory ((32:B)) and discriminatory ((32:A)) standards of behavior, the latter both in their 100 per cent injurious form, c = -1, and in a continuous family of less and less injurious ones, -1 < c < 1/2. It is particularly significant that we did not look for any such thing: the heuristic discussions of 29.1. were certainly not in this spirit, but we were nevertheless forced to these conclusions by the rigorous theory itself. And these situations arose even in the extremely simple framework of the zero-sum three-person game!
For n ≥ 4 we must expect a much greater wealth of possibilities for all sorts of schemes of discrimination, prejudices, privileges, etc. Besides these, we must always look out for the analogues of the solution (32:B), i.e. the non-discriminating "objective" solutions. But we shall see that the conditions are far from simple. And we shall also see that it is precisely the investigation of the discriminatory "inobjective" solutions which leads to a proper understanding of the general non-zero-sum games and thence to application to economics.
33.2. Statics and Dynamics

33.2. At this point it may be advantageous to recall the discussions of 4.8.2. concerning statics and dynamics. What we said then applies now; indeed, it was really meant for the phase which our theory has now reached.

In 29.2. and in the places referred to there, we considered the negotiations, expectations and fears which precede the formation of a coalition and which determine its conditions. These were all of the quasi-dynamic type described in 4.8.2. The same applies to our discussion in 4.6., and again in 30.2., of how various imputations may or may not dominate each other depending on their relationship to a solution; i.e., how the conducts approved by an established standard of behavior do not conflict with each other, but can be used to discredit the non-approved varieties.

The excuse, and the necessity, for using such considerations in a static theory were set forth on that occasion. Thus it is not necessary to repeat them now.
CHAPTER VII
ZERO-SUM FOUR-PERSON GAMES

34. Preliminary Survey

34.1. General Viewpoints

34.1. We are now in possession of a general theory of the zero-sum n-person game, but the state of our information is still far from satisfactory. Save for the formal statement of the definitions we have penetrated but little below the surface. The applications which we have made, i.e. the special cases in which we have succeeded in determining our solutions, can be rated only as providing a preliminary orientation. As pointed out in 30.4.2., these applications cover all cases n ≤ 3, but we know from our past discussions how little this is in comparison with the general problem. Thus we must turn to games with n ≥ 4, and it is only here that the full complexity of the interplay of coalitions can be expected to emerge. A deeper insight into the nature of our problems will be achieved only when we have mastered the mechanisms which govern these phenomena.

The present chapter is devoted to zero-sum four-person games. Our information about these still presents many lacunae. This compels an inexhaustive and chiefly casuistic treatment, with its obvious shortcomings.¹ But even this imperfect exposition will disclose various essential qualitative properties of the general theory which could not be encountered previously (for n ≤ 3). Indeed, it will be noted that the interpretation of the mathematical results of this phase leads quite naturally to specific "social" concepts and formulations.
34.2. Formalism of the Essential Zero-sum Four-person Game

34.2.1. In order to acquire an idea of the nature of the zero-sum four-person games we begin with a purely descriptive classification.

Let therefore an arbitrary zero-sum four-person game Γ be given, which we may as well consider in its reduced form; and also let us choose γ = 1.² These assertions correspond, as we know from (27:7*) and (27:7**) in 27.2., to the following statements concerning the characteristic function:

(34:1) v(S) = 0, -1, 1, 0 when S has 0, 1, 3, 4 elements, respectively.
¹ E.g., a considerable emphasis on heuristic devices.
² Cf. 27.1.4. and 27.3.2. The reader will note the analogy between this discussion and that of 29.1.2. concerning the zero-sum three-person game. About this more will be said later.
291
292 ZERO-SUM FOUR-PERSON GAMES
Thus only the v(S) of the two-element sets S remain undetermined by these normalizations. We therefore direct our attention to these sets. The set I = (1,2,3,4) of all players possesses six two-element subsets S:

(1,2), (1,3), (1,4), (2,3), (2,4), (3,4).

Now the v(S) of these sets cannot be treated as independent variables, because each one of these S has another one of the same sequence as its complement. Specifically: the first and the last, the second and the fifth, the third and the fourth, are complements of each other respectively. Hence their v(S) are the negatives of each other. It is also to be remembered that by the inequality (27:7) in 27.2. (with n = 4, p = 2) all these v(S) are ≥ -2, ≤ 2. Hence if we put

(34:2) v((1,4)) = 2x₁, v((2,4)) = 2x₂, v((3,4)) = 2x₃,

then we have

(34:3) v((2,3)) = -2x₁, v((1,3)) = -2x₂, v((1,2)) = -2x₃,

and in addition

(34:4) -1 ≤ x₁, x₂, x₃ ≤ 1.

Conversely: If any three numbers x₁, x₂, x₃ fulfilling (34:4) are given, then we can define a function v(S) (for all subsets S of I = (1,2,3,4)) by (34:1)-(34:3), but we must show that this v(S) is the characteristic function of a game. By 26.1. this means that our present v(S) fulfills the conditions (25:3:a)-(25:3:c) of 25.3.1. Of these, (25:3:a) and (25:3:b) are obviously fulfilled, so only (25:3:c) remains. By 25.4.2. this means showing that

v(S₁) + v(S₂) + v(S₃) ≤ 0 if S₁, S₂, S₃ are a decomposition of I.

(Cf. also (25:6) in 25.4.1.) If any of the sets S₁, S₂, S₃ is empty, the two others are complements and so we even have equality by (25:3:a), (25:3:b) in 25.3.1. So we may assume that none of the sets S₁, S₂, S₃ is empty. Since four elements are available altogether, one of these sets, say S₁ = S, must have two elements, while the two others are one-element sets. Thus our inequality becomes

v(S) - 2 ≤ 0, i.e. v(S) ≤ 2.

If we express this for all two-element sets S, then (34:2), (34:3) transform the inequality into

2x₁ ≤ 2, 2x₂ ≤ 2, 2x₃ ≤ 2,
-2x₁ ≤ 2, -2x₂ ≤ 2, -2x₃ ≤ 2,

which is equivalent to the assumed (34:4). Thus we have demonstrated:
PRELIMINARY SURVEY 293
(34:A) The essential zero-sum four-person games (in their reduced form with the choice γ = 1) correspond exactly to the triplets of numbers x₁, x₂, x₃ fulfilling the inequalities (34:4). The correspondence between such a game, i.e. its characteristic function, and its x₁, x₂, x₃ is given by the equations (34:1)-(34:3).¹
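The correspondence (34:A) is mechanical to implement. A sketch (the dictionary representation and names are ours):

```python
def characteristic_function(x1, x2, x3):
    """Build v(S) from (x1, x2, x3) via (34:1)-(34:3).
    Players are 1,2,3,4; sets S are represented as frozensets."""
    assert all(-1 <= x <= 1 for x in (x1, x2, x3))   # (34:4)
    players = frozenset({1, 2, 3, 4})
    v = {frozenset(): 0, players: 0}                 # (34:1), 0 and 4 elements
    for i in (1, 2, 3, 4):
        v[frozenset({i})] = -1                       # (34:1), 1 element
        v[players - {i}] = 1                         # (34:1), 3 elements
    for i, x in zip((1, 2, 3), (x1, x2, x3)):
        v[frozenset({i, 4})] = 2 * x                 # (34:2)
        v[frozenset({1, 2, 3}) - {i}] = -2 * x       # (34:3)
    return v

from itertools import combinations

v = characteristic_function(0.5, -0.25, 1)
# Complementary sets have opposite values, as a zero-sum game requires:
players = frozenset({1, 2, 3, 4})
for k in range(5):
    for S in map(frozenset, combinations(players, k)):
        assert v[S] == -v[players - S]
```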
34.2.2. The above representation of the essential zero-sum four-person games by triplets of numbers x₁, x₂, x₃ can be illustrated by a simple geometrical picture. We can view the numbers x₁, x₂, x₃ as the Cartesian coordinates of a point.² In this case the inequalities (34:4) describe a part of space which exactly fills a cube Q. This cube is centered at the origin of the coordinates, and its edges are of length 2, because its six faces are the six planes

x₁ = ±1, x₂ = ±1, x₃ = ±1,

as shown in Figure 61.

Figure 61.
Thus each essential zero-sum four-person game Γ is represented by precisely one point in the interior or on the surface of this cube, and vice versa. It is quite useful to view these games in this manner and to try to correlate their peculiarities with the geometrical conditions in Q. It will be particularly instructive to identify those games which correspond to definite significant points of Q.
¹ The reader may now compare our result with that of 29.1.2. concerning the zero-sum three-person games. It will be noted how the variety of possibilities has increased.
² We may also consider these numbers as the components of a vector in L₃ in the sense of 16.1.2. et seq. This aspect will sometimes be the more convenient, as in footnote 1 on p. 304.
But even before carrying out this program, we propose to consider certain questions of symmetry. We want to uncover the connections between the permutations of the players 1,2,3,4 and the geometrical transformations (motions) of Q. Indeed, by 28.1. the former correspond to the symmetries of the game Γ, while the latter obviously express the symmetries of the geometrical object.
34.3. Permutations of the Players

34.3.1. In evolving the geometrical representation of the essential zero-sum four-person game we had to perform an arbitrary operation, i.e. one which destroyed part of the symmetry of the original situation. Indeed, in describing the v(S) of the two-element sets S, we had to single out three from among these sets (which are six in number), in order to introduce the coordinates x₁, x₂, x₃. We actually did this in (34:2), (34:3) by assigning the player 4 a particular role and then setting up a correspondence between the players 1,2,3 and the quantities x₁, x₂, x₃ respectively (cf. (34:2)). Thus a permutation of the players 1,2,3 will induce the same permutation of the coordinates x₁, x₂, x₃, and so far the arrangement is symmetric. But these are only six permutations from among the total of 24 permutations of the players 1,2,3,4.¹ So a permutation which replaces the player 4 by another one is not accounted for in this way.
34.3.2. Let us consider such a permutation. For reasons which will appear immediately, consider the permutation A, which interchanges the players 1 and 4 with each other and also the players 2 and 3.² A look at the equations (34:2), (34:3) suffices to show that this permutation leaves x₁ invariant, while it replaces x₂, x₃ by -x₂, -x₃. Similarly one verifies: The permutation B, which interchanges 2 and 4, and also 1 and 3, leaves x₂ invariant and replaces x₁, x₃ by -x₁, -x₃. The permutation C, which interchanges 3 and 4, and also 1 and 2, leaves x₃ invariant and replaces x₁, x₂ by -x₁, -x₂.

Thus each one of the three permutations A, B, C affects the variables x₁, x₂, x₃ only as far as their signs are concerned, each changing two signs and leaving the third invariant.

Since they also carry 4 into 1, 2, 3, respectively, they produce all permutations of the players 1,2,3,4 if combined with the six permutations of the players 1,2,3. Now we have seen that the latter correspond to the six permutations of x₁, x₂, x₃ (without changes in sign). Consequently the 24 permutations of 1,2,3,4 correspond to the six permutations of x₁, x₂, x₃, each one in conjunction with no change of sign or with two changes of sign.³
¹ Cf. 28.1.1., following the definitions (28:A:a), (28:A:b).
² With the notations of 29.1.:

A = (1,2,3,4 → 4,3,2,1), B = (1,2,3,4 → 3,4,1,2), C = (1,2,3,4 → 2,1,4,3).

³ These sign changes are 1 + 3 = 4 possibilities in each case, so we have 6 × 4 = 24 operations on x₁, x₂, x₃ to represent the 24 permutations of 1,2,3,4, as it should be.
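The sign-change bookkeeping of 34.3.2. (and the 48/24 counts of 34.3.3. below) can be verified directly. A sketch (the encodings are ours):

```python
from itertools import permutations, product

# All motions of the cube Q: permutations of (x1, x2, x3) combined with
# sign changes. Only those with an even number of sign changes (0 or 2)
# come from permutations of the players.
motions = [(perm, signs)
           for perm in permutations(range(3))
           for signs in product((1, -1), repeat=3)]
assert len(motions) == 48            # 6 permutations x 8 sign patterns

even = [(p, s) for p, s in motions if s.count(-1) % 2 == 0]
assert len(even) == 24               # matches the 24 permutations of 1,2,3,4

def apply(motion, x):
    perm, signs = motion
    return tuple(signs[i] * x[perm[i]] for i in range(3))

# E.g. the permutation A (1<->4, 2<->3) keeps x1 and flips x2, x3:
A = ((0, 1, 2), (1, -1, -1))
assert apply(A, (0.5, 0.25, -1)) == (0.5, -0.25, 1)
```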
SPECIAL POINTS IN THE CUBE Q 295
34.3.3. We may also state this as follows: If we consider all movements in space which carry the cube Q into itself, it is easily verified that they consist of the permutations of the coordinate axes x₁, x₂, x₃ in combination with any reflections on the coordinate planes (i.e. the planes of x₂, x₃; of x₁, x₃; and of x₁, x₂). Mathematically these are the permutations of x₁, x₂, x₃ in combination with any changes of sign among the x₁, x₂, x₃. These are 48 possibilities.¹ Only half of these, the 24 for which the number of sign changes is even (i.e. 0 or 2), correspond to the permutations of the players.
Figure 62.
It is easily verified that these are precisely the 24 which not only carry the cube Q into itself, but also the tetrahedron I, V, VI, VII indicated in Figure 62. One may also characterize such a movement by observing that it always carries a vertex ● of Q into a vertex ●, and equally a vertex ○ into a vertex ○, but never a ● into a ○.²

We shall now obtain a much more immediate interpretation of these statements by describing directly the games which correspond to specific points of the cube Q: to the vertices ● or ○, to the center (the origin in Figure 61), and to the main diagonals of Q.
35. Discussion of Some Special Points in the Cube Q

35.1. The Corner I (and V, VI, VII)

35.1.1. We begin by determining the games which correspond to the four corners I, V, VI, VII. We have seen that they arise from each other by suitable permutations of the players 1,2,3,4. Therefore it suffices to consider one of them, say I.
¹ For each variable x₁, x₂, x₃ there are two possibilities: change or no change of sign. This gives altogether 2³ = 8 possibilities. Combination with the six permutations of x₁, x₂, x₃ yields 8 × 6 = 48 operations.
² This group of motions is well known in group theory and particularly in crystallography, but we shall not elaborate the point further.
296 ZERO-SUM FOUR-PERSON GAMES
The point I corresponds to the values 1,1,1 of the coordinates x1, x2, x3. Thus the characteristic function v(S) of this game is:

(35:1)   v(S) =   -1   when S has 1 element,
                   2   when S has 2 elements and 4 belongs to S,
                  -2   when S has 2 elements and 4 does not belong to S,
                   1   when S has 3 elements,
                   0   when S has 4 elements.
(Verification is immediate with the help of (34:1), (34:2), (34:3) in 34.2.1.)
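The rule behind such verifications can be written down once and for all. The following sketch (ours, not part of the original text) reconstructs the characteristic function from (35:1), (35:6) and (36:1): v(S) = -1, 1, 0 for sets of 1, 3, and 0 or 4 elements; for two-element sets, v((j,4)) = 2xj, and the value of the complementary pair follows from the zero-sum condition v(S) = -v(-S).

```python
from itertools import combinations

# Characteristic function of the normalized four-person game with
# coordinates x = (x1, x2, x3), reconstructed from (35:1), (35:6), (36:1):
# v(S) = -1, 1, 0 for |S| = 1, 3, 0 or 4; for two-element sets,
# v((j,4)) = 2*xj, and the complementary pair gets -2*xj by v(S) = -v(-S).
def v(S, x):
    S = frozenset(S)
    n = len(S)
    if n in (0, 4):
        return 0
    if n == 1:
        return -1
    if n == 3:
        return 1
    if 4 in S:
        (j,) = S - {4}
        return 2 * x[j - 1]
    (j,) = {1, 2, 3} - S      # the two-element set is the complement of (j,4)
    return -2 * x[j - 1]

# Corner I, x = (1,1,1), reproduces (35:1) ...
xI = (1, 1, 1)
assert (v({1}, xI), v({1, 4}, xI), v({1, 2}, xI), v({1, 2, 3}, xI)) == (-1, 2, -2, 1)

# ... and the zero-sum condition v(S) = -v(-S) holds for every S.
players = (1, 2, 3, 4)
for n in range(5):
    for S in combinations(players, n):
        assert v(S, xI) == -v(set(players) - set(S), xI)
```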
Instead of applying the mathematical theory of Chapter VI to this game, let
us first see whether it does not allow an immediate intuitive interpretation.
Observe first that a player who is left to himself loses the amount 1. This is manifestly the worst thing that can ever happen to him, since he can protect himself against further losses without anybody else's help.1 Thus we may consider a player who gets this amount -1 as completely defeated. A coalition of two players may be considered as defeated if it gets the amount -2, since then each player in it must necessarily get -1.2,3 In this game the coalition of any two players is defeated in this sense if it does not comprise player 4.
Let us now pass to the complementary sets. If a coalition is defeated
in the above sense, it is reasonable to consider the complementary set
as a winning coalition. Therefore the two-element sets which contain the player 4 must be rated as winning coalitions. Also since any player who remains isolated must be rated as defeated, three-person coalitions always win. This is immaterial for those three-element coalitions which contain the player 4, since in these coalitions two members are winning already if the player 4 is among them. But it is essential that 1,2,3 be a winning coalition, since all its proper subsets are defeated.4
1 This view of the matter is corroborated by our results concerning the three-person game in 23. and 32.2., and more fundamentally by our definition of the imputation in 30.1.1., particularly condition (30:1).
2 Since neither he nor his partner need accept less than -1, and they have together -2, this is the only way in which they can split.
3 In the terminology of 31.1.4.: this coalition is flat. There is of course no gain, and therefore no possible motive for two players to form such a coalition. But if it happens that the two other players have combined and show no desire to acquire a third ally, we may treat the remaining two as a coalition even in this case.
4 We warn the reader that, although we have used the words "defeated" and "winning" almost as termini technici, this is not our intention. These concepts are, indeed, very well suited for an exact treatment. The "defeated" and "winning" coalitions actually coincide with the sets S considered in (31:F) and in (31:G) in 31.1.5.: those for which S is flat or -S is flat, respectively. But we shall consider this question in such a way only in Chap. X.
For the moment our considerations are absolutely heuristic and ought to be taken in the same spirit as the heuristic discussions of the zero-sum three-person game in 21., 22. The only difference is that we shall be considerably briefer now, since our experience and routine have grown substantially in the discussion.
As we now possess an exact theory of solutions for games already, we are under
35.1.2. So it is plausible to view this as a struggle for participation in
any one of the various possible coalitions:
(35:2) (1,4), (2,4), (3,4), (1,2,3),
where the amounts obtainable for these coalitions are:
(35:3) v((1,4)) = v((2,4)) = v((3,4)) = 2,   v((1,2,3)) = 1.
Observe that this is very similar to the situation which we found in
the essential zero-sum three-person game, where the winning coalitions
were:
(35:2*) (1,2), (1,3), (2,3),
and the amounts obtainable for these coalitions:
(35:3*) v((1,2)) = v((1,3)) = v((2,3)) = 1.
In the three-person game we determined the distribution of the proceeds (35:3*) among the winners by assuming: A player in a winning coalition should get the same amount no matter which is the winning coalition. Denoting these amounts for the players 1,2,3 by α, β, γ respectively, (35:3*) gives

(35:4*)   α + β = α + γ = β + γ = 1,

from which follows

(35:5*)   α = β = γ = 1/2.
These were indeed the values which those considerations yielded.
Let us assume the same principle in our present four-person game. Denote by α, β, γ, δ, respectively, the amount that each player 1,2,3,4 gets if he succeeds in participating in a winning coalition. Then (35:3) gives

(35:4)   α + δ = β + δ = γ + δ = 2,   α + β + γ = 1,

from which follows

(35:5)   α = β = γ = 1/3,   δ = 5/3.
All the heuristic arguments used in 21., 22. for the three-person game could
be repeated. 1
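The small linear system (35:4) can also be verified mechanically. The following sketch (ours, not the authors') checks with exact rational arithmetic that the amounts of (35:5) satisfy it and that no other solution exists.

```python
from fractions import Fraction

# Winning shares of (35:5): players 1,2,3 get 1/3 each, player 4 gets 5/3.
alpha = beta = gamma = Fraction(1, 3)
delta = Fraction(5, 3)

# They satisfy the system (35:4) derived from (35:3):
assert alpha + delta == beta + delta == gamma + delta == 2   # v((j,4)) = 2
assert alpha + beta + gamma == 1                             # v((1,2,3)) = 1

# Uniqueness: the first three equations force alpha = beta = gamma; then
# 3*alpha = 1 fixes alpha, and delta = 2 - alpha fixes delta.
assert 3 * alpha == 1 and delta == 2 - alpha
```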
35.1.3. Summing up:
(35:A)   This is a game in which the player 4 is in a specially favored
position to win: any one ally suffices for him to form a winning
coalition. Without his cooperation, on the other hand, three
players must combine. This advantage also expresses itself in
obligation to follow up this preliminary heuristic analysis by an exact analysis which
is based rigorously on the mathematical theory. We shall come to this. (Cf. loc. cit.
above, and also the beginning of 36.2.3.)
1 Of course, without making this thereby a rigorous discussion on the basis of 30.1.
the amounts which each player 1,2,3,4 should get when he is among the winners if our above heuristic deduction can be trusted. These amounts are 1/3, 1/3, 1/3, 5/3 respectively. It is to be noted that the advantage of player 4 refers to the case of victory only; when defeated, all players are in the same position (i.e. get -1).
The last mentioned circumstance is, of course, due to our normalization by reduction. Independently of any normalization, however, this game exhibits the following trait: One player's quantitative advantage over another may, when both win, differ from what it is when both lose.
This cannot happen in a three-person game, as is apparent from the
formulation which concludes 22.3.4. Thus we get a first indication of an
important new factor that emerges when the number of participants reaches
four.
35.1.4. One last remark seems appropriate. In this game player 4's
strategic advantage consisted in the fact that he needed only one ally
for victory, whereas without him a total of three partners was necessary.
One might try to pass to an even more extreme form by constructing a game
in which every coalition that does not contain player 4 is defeated. It is
essential to visualize that this is not so, or rather that such an advantage
is no longer of a strategic nature. Indeed in such a game

        v(S) =   -1   when S has 1 element and 4 does not belong to S,
                 -2   when S has 2 elements and 4 does not belong to S,
                 -3   when S has 3 elements and 4 does not belong to S,

hence

        v(S) =    3   when S has 1 element and 4 belongs to S,
                  2   when S has 2 elements and 4 belongs to S,
                  1   when S has 3 elements and 4 belongs to S,
                  0   when S has 4 elements.

This is not reduced, as

        v((1)) = v((2)) = v((3)) = -1,   v((4)) = 3.

If we apply the reduction process of 27.1.4. to this v(S) we find that its reduced form is

        v(S) ≡ 0,

i.e. the game is inessential. (This could have been shown directly by (27:B) in 27.4.) Thus this game has a uniquely determined value for each player 1,2,3,4: -1, -1, -1, 3, respectively.
In other words: Player 4's advantage in this game is one of a fixed
payment (i.e. of cash), and not one of strategic possibilities. The former
is, of course, more definite and tangible than the latter, but of less theoretical
interest since it can be removed by our process of reduction.
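The reduction argument can be replayed mechanically. The sketch below (ours, not part of the original text) implements the side-payment scheme of 27.1.4. in the obvious way, assuming only that the reduction adds to each player k a fixed amount with sum 0, chosen so that all reduced values v'((k)) become equal; it confirms that the reduced form vanishes identically.

```python
from itertools import combinations

# The game of 35.1.4.: every coalition without player 4 is completely defeated.
def v(S):
    S = frozenset(S)
    if 4 in S:
        return -v({1, 2, 3, 4} - S)   # zero-sum complementarity
    return -len(S)                     # each member is held to -1

# Reduction process of 27.1.4. (sketched): give each player k a fixed side
# payment a0[k], with sum 0, chosen so that all reduced values v'((k)) agree.
gamma = -sum(v({k}) for k in (1, 2, 3, 4)) / 4
a0 = {k: -gamma - v({k}) for k in (1, 2, 3, 4)}
assert sum(a0.values()) == 0

def v_reduced(S):
    return v(S) + sum(a0[k] for k in S)

# The reduced form vanishes identically, i.e. the game is inessential, and
# the unique values of the original game are -1, -1, -1, 3.
for n in range(5):
    for S in combinations((1, 2, 3, 4), n):
        assert v_reduced(S) == 0
assert [v({k}) for k in (1, 2, 3, 4)] == [-1, -1, -1, 3]
```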
35.1.5. We observed at the beginning of this section that the corners
V, VI, VII differ from I only by permutations of the players. It is easily verified that the special role of player 4 in I is enjoyed by the players 1,2,3,
in V, VI, VII, respectively.
35.2. The Corner VIII (and II, III, IV). The Three-person Game and a "Dummy"
35.2.1. We next consider the games which correspond to the four corners ○: II, III, IV, VIII. As they arise from each other by suitable permutations of the players 1,2,3,4, it suffices to consider one of them, say VIII.
The point VIII corresponds to the values -1, -1, -1 of the coordinates x1, x2, x3. Thus the characteristic function v(S) of this game is:
(35:6)   v(S) =   -1   when S has 1 element,
                  -2   when S has 2 elements and 4 belongs to S,
                   2   when S has 2 elements and 4 does not belong to S,
                   1   when S has 3 elements,
                   0   when S has 4 elements.
(Verification is immediate with the help of (34:1), (34:2), (34:3) in 34.2.1.)
Again, instead of applying to this game the mathematical theory of Chapter
VI, let us first see whether it does not allow an immediate intuitive
interpretation.
The important feature of this game is that the inequality (25:3:c) in
25.3. becomes an equality, i.e. :
(35:7)   v(S ∪ T) = v(S) + v(T)   if S ∩ T = ∅,
when T = (4). That is: If S represents a coalition which does not contain
the player 4, then the addition of 4 to this coalition is of no advantage;
i.e. it does not affect the strategic situation of this coalition nor of its
opponents in any way. This is clearly the meaning of the additivity
expressed by (35 :7). 1
35.2.2. This circumstance suggests the following conclusion, which
is of course purely heuristic. 2 Since the accession of player 4 to any
1 Note that the indifference in acquiring the cooperation of 4 is expressed by (35:7), and not by

v(S ∪ T) = v(S).

That is, a player is "indifferent" as a partner not if his accession does not alter the value of a coalition, but if he brings into the coalition exactly the amount, and no more, which he is worth outside.
This remark may seem trivial; but there exists a certain danger of misunderstanding, particularly in non-reduced games where v((4)) > 0, i.e. where the accession of 4 (although strategically indifferent!) actually increases the value of a coalition.
Observe also that the indifference of S and T = (4) to each other is a strictly reciprocal relationship.
2 We shall later undertake exact discussion on the basis of 30.1. At that time it will
be found also that all these games are special cases of more general classes of some
importance. (Cf. Chap. IX, particularly 41.2.)
coalition appears to be a matter of complete indifference to both sides, it seems plausible to assume that player 4 has no part in the transactions that constitute the strategy of the game. He is isolated from the others, and the amount which he can get for himself, v((4)) = -1, is the actual value of the game for him. The other players 1,2,3, on the other hand, play the game strictly among themselves; hence they are playing a three-person game. The values of the original characteristic function v(S)
which describes this three-person game are:

(35:6*)   v((1)) = v((2)) = v((3)) = -1,
          v((1,2)) = v((1,3)) = v((2,3)) = 2,
          v((1,2,3)) = 1,

where I' = (1,2,3) is now the set of all players. (Verify this from (35:6).)
At first sight this three-person game represents the oddity that v(I') (I' is now the set of all players!) is not zero. This, however, is perfectly reasonable: by eliminating player 4 we transform the game into one which is not of zero sum; since we assessed player 4 a value -1, the others retain together a value 1. We do not yet propose to deal with this situation systematically. (Cf. footnote 2 on p. 299.) It is obvious, however, that this condition can be remedied by a slight generalization of the transformation used in 27.1. We modify the game of 1,2,3 by assuming that each one got the amount 1/3 in cash in advance, and then compensating for this by deducting equivalent amounts from the v(S) values in (35:6*). Just as in 27.1., this cannot affect the strategy of the game, i.e. it produces a strategically equivalent game.1
After consideration of the compensations mentioned above2 we obtain the new characteristic function:

(35:6**)   v'(∅) = 0,
           v'((1)) = v'((2)) = v'((3)) = -4/3,
           v'((1,2)) = v'((1,3)) = v'((2,3)) = 4/3,
           v'((1,2,3)) = 0.
This is the reduced form of the essential zero-sum three-person game discussed in 32., except for a difference in unit: We have now γ = 4/3 instead
1 In the terminology of 27.1.1.: α1° = α2° = α3° = -1/3. The condition there which we have infringed is (27:1): Σk αk° = 0. This is necessary since we started with a non-zero-sum game.
Even Σk αk° = 0 could be safeguarded if we included player 4 in our considerations, putting α4° = 1. This would leave him just as isolated as before, but the necessary compensation would make v((4)) = 0, with results which are obvious.
One can sum this up by saying that in the present situation it is not the reduced form of the game which provides the best basis of discussion among all strategically equivalent forms.
2 I.e. deduction of as many times 1/3 from v(S) as S has elements.
of the γ = 1 of (32:1) in 32.1.1. Thus we can apply the heuristic results of 23.1.3., or the exact results of 32. 1 Let us restrict ourselves, at any rate, to the solution which appears in both cases and which is the simplest one: (32:B) of 32.2.3. This is the set of imputations (32:6) in 32.2.1., which we must multiply by the present value of γ = 4/3; i.e.:

{2/3, 2/3, -4/3},   {2/3, -4/3, 2/3},   {-4/3, 2/3, 2/3}.

(The players are, of course, 1,2,3.) In other words: The aim of the strategy of the players 1,2,3 is to form any coalition of two; a player who succeeds in this, i.e. who is victorious, gets 2/3, and a player who is defeated gets -4/3. Now each of the players 1,2,3 of our original game gets the extra amount 1/3 beyond this, hence the above amounts 2/3, -4/3 must be replaced by 1, -1.
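The passage from (35:6*) to (35:6**), and the conversion of the amounts 2/3, -4/3 back into 1, -1, can be checked mechanically. The following sketch (ours, not part of the original text) does so with exact rational arithmetic.

```python
from fractions import Fraction

# Three-person game (35:6*) among players 1,2,3 after setting the dummy 4 aside.
v = {frozenset(): 0,
     frozenset({1}): -1, frozenset({2}): -1, frozenset({3}): -1,
     frozenset({1, 2}): 2, frozenset({1, 3}): 2, frozenset({2, 3}): 2,
     frozenset({1, 2, 3}): 1}

# Strategic equivalence as in 27.1.: each player is paid 1/3 in cash in
# advance, and 1/3 per member is deducted from every v(S); this is (35:6**).
third = Fraction(1, 3)
v2 = {S: val - len(S) * third for S, val in v.items()}

assert v2[frozenset({1})] == Fraction(-4, 3)     # = -gamma, with gamma = 4/3
assert v2[frozenset({1, 2})] == Fraction(4, 3)   # =  gamma
assert v2[frozenset({1, 2, 3})] == 0             # zero-sum is restored

# Winners get gamma/2 = 2/3 and losers -gamma = -4/3 in the reduced game;
# adding back the 1/3 advance yields the amounts 1 and -1 of (35:B).
gamma = Fraction(4, 3)
assert gamma / 2 + third == 1 and -gamma + third == -1
```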
35.2.3. Summing up:
(35:B)   This is a game in which the player 4 is excluded from all coalitions. The strategic aim of the other players 1,2,3 is to form any coalition of two. Player 4 gets -1 at any rate. Any other player 1,2,3 gets the amount 1 when he is among the winners, and the amount -1 when he is defeated. All this is based on heuristic considerations.
One might say more concisely that this four-person game is only an "inflated" three-person game: the essential three-person game of the players 1,2,3, inflated by the addition of a "dummy" player 4. We shall see later
that this concept is of a more general significance. (Cf. footnote 2 on
p. 299.)
35.2.4. One might compare the dummy role of player 4 in this game with the exclusion a player undergoes in the discriminatory solution (32:A)
in 32.2.3., as discussed in 33.1.2. There is, however, an important difference between these two phenomena. In our present set-up, player 4 has
really no contribution to make to any coalition at all; he stands apart by
virtue of the characteristic function v(S). Our heuristic considerations
indicate that he should be excluded from all coalitions in all acceptable
solutions. We shall see in 46.9. that the exact theory establishes just
this. The excluded player of a discriminatory solution in the sense of
33.1.2. is excluded only in the particular situation under consideration. As
far as the characteristic function of that game is concerned, his role is the
same as that of all other players. In other words: The "dummy" in our
present game is excluded by virtue of the objective facts of the situation
(the characteristic function v(S)). 2 The excluded player in a discrimi
natory solution is excluded solely by the arbitrary (though stable)
"prejudices" that the particular standard of behavior (solution) expresses.
1 Of course the present discussion is heuristic in any event. As to the exact treatment, cf. footnote 2 on p. 299.
2 This is the "physical background," in the sense of 4.6.3.
We observed at the beginning of this section that the corners II, III, IV differ from VIII only by permutations of the players. It is easily verified that the special role of player 4 in VIII is enjoyed by the players 1,2,3 in II, III, IV, respectively.
35.3. Some Remarks Concerning the Interior of Q
35.3.1. Let us now consider the game which corresponds to the center of Q, i.e. to the values 0,0,0 of the coordinates x1, x2, x3. This game is clearly unaffected by any permutation of the players 1,2,3,4, i.e. it is symmetric. Observe that it is the only such game in Q, since total symmetry means invariance under all permutations of x1, x2, x3 and sign changes of any two of them (cf. 34.3.); hence x1 = x2 = x3 = 0.
The characteristic function v(S) of this game is: 1

(35:8)   v(S) =   -1   when S has 1 element,
                   0   when S has 2 elements,
                   1   when S has 3 elements,
                   0   when S has 4 elements.
(Verification is immediate with the help of (34:1), (34:2), (34:3) in 34.2.1.)
The exact solutions of this game are numerous; indeed, one must say that
they are of a rather bewildering variety. It has not been possible yet to
order them and to systematize them by a consistent application of the exact
theory, to such an extent as one would desire. Nevertheless the known
specimens give some instructive insight into the ramifications of the theory.
We shall consider them in somewhat more detail in 37. and 38.
At present we make only this (heuristic) remark: The idea of this
(totally) symmetric game is clearly that any majority of the players (i.e. any
coalition of three) wins, whereas in case of a tie (i.e. when two coalitions
form, each consisting of two players) no payments are made.
35.3.2. The center of Q represented the only (totally) symmetric game
in our setup: with respect to all permutations of the players 1,2,3,4. The
geometrical picture suggests consideration of another symmetry as well:
with respect to all permutations of the coordinates x1, x2, x3. In this way we select the points of Q with

(35:9)   x1 = x2 = x3,

which form a main diagonal of Q, the line

(35:10)   I-Center-VIII.
We saw at the beginning of 34.3.1. that this symmetry means precisely
that the game is invariant with respect to all permutations of the players
1,2,3. In other words:
1 This representation shows once more that the game is symmetric, and uniquely
characterized by this property. Cf. the analysis of 28.2.1.
The main diagonal (35:9), (35:10) represents all those games which are
symmetric with respect to the players 1,2,3, i.e. where only player 4 may
have a special role.
Q has three other main diagonals (II-Center-V, III-Center-VI, IV-Center-VII), and they obviously correspond to those games where another player (players 1,2,3, respectively) alone may have a special role.
Let us return to the main diagonal (35:9), (35:10). The three games which we have previously considered (I, VIII, Center) lie on it; indeed in all these games only player 4 had a special role.1 Observe that the entire category of games is a one-parameter variety. Owing to (35:9), such a game is characterized by the value x1 in

(35:11)   -1 ≦ x1 ≦ 1.

The three games mentioned above correspond to the extreme values x1 = 1, x1 = -1 and to the middle value x1 = 0. In order to get more insight into the working of the exact theory, it would be desirable to determine exact solutions for all these values of x1, and then to see how these solutions shift as x1 varies continuously along (35:10). It would be particularly instructive to find out how the qualitatively different kinds of solutions recognized for the special values x1 = 1, 0, -1 go over into each other. In 36.3.2. we shall give indications about the information that is now available in this regard.
35.3.3. Another question of interest is this: Consider a game, i.e. a point in Q, where we can form some intuitive picture of what solutions to expect, e.g. the corner VIII. Then consider a game in the immediate neighborhood of VIII, i.e. one with only slightly changed values of x1, x2, x3. Now it would be desirable to find exact solutions for these neighboring games, and to see in what details they differ from the solutions of the original game, i.e. how a small distortion of x1, x2, x3 distorts the solutions.2 Special cases of this problem will be considered in 36.1.2., and at the end of 37.1.1., as well as in 38.2.7.
35.3.4. So far we have considered games that are represented by points of Q in more or less special positions.3 A more general, and possibly more typical, problem arises when the representative point X is somewhere in the interior of Q, in "general" position, i.e. in a position with no particular distinguishing properties.
Now one might think that a good heuristic lead for the treatment of the problem in such points is provided by the following consideration. We have some heuristic insight into the conditions at the corners I-VIII (cf. 35.1. and 35.2.). Any point X of Q is somehow "surrounded" by these corners; more precisely, it is their center of gravity, if appropriate weights
1 In the center not even he.
2 This procedure is familiar in mathematical physics, where it is used in attacking problems which cannot be solved in their general form for the time being: it is the analysis of perturbations.
3 Corners, the center, and entire main diagonals.
are used. Hence one might suspect that the strategy of the games represented by X is in some way a combination of the (more familiar) strategies of the games represented by I-VIII. One might even hope that this "combination" will in some sense be similar to the formation of the center of gravity which related X to I-VIII.1
We shall see in 36.3.2. and in 38.2.5.-7. that this is true in limited parts of Q, but certainly not over all of Q. In fact, in certain interior areas of Q phenomena occur which are qualitatively different from anything exhibited by I-VIII. All this goes to show what extreme care must be exercised in dealing with notions involving strategy, or in making guesses about them. The mathematical approach is in such an early stage at present that much more experience will be needed before one can feel any self-assurance in this respect.
36. Discussion of the Main Diagonals
36.1. The Part Adjacent to the Corner VIII: Heuristic Discussion
36.1.1. The systematic theory of the four-person game has not yet
advanced so far as to furnish a complete list of solutions for all the games
represented by all points of Q. We are not able to specify even one solution
for every such game. Investigations thus far have succeeded only in
determining solutions (sometimes one, sometimes more) in certain parts of
Q. It is only for the eight corners I-VIII that a demonstrably complete
list of solutions has been established. At present the parts of Q in
which solutions are known at all form a rather haphazard array of linear,
plane and spatial areas. They are distributed all over Q but do not fill it
out completely.
The exhaustive list of solutions which are known for the corners I-VIII
can easily be established with help of the results of Chapters IX and X,
where these games will be fitted into certain larger divisions of the general
theory. At present we shall restrict ourselves to the casuistic approach
1 Consider two points X = {x1, x2, x3} and Y = {y1, y2, y3} in Q. We may view these as vectors in L3, and it is indeed in this sense that the formation of a center of gravity

tX + (1 - t)Y = {tx1 + (1 - t)y1, tx2 + (1 - t)y2, tx3 + (1 - t)y3}

is understood. (Cf. (16:A:c) in 16.2.1.)
Now if X = {x1, x2, x3} and Y = {y1, y2, y3} define the characteristic functions v(S) and w(S) in the sense of (34:1)-(34:3) in 34.2.1., then tX + (1 - t)Y will give, by the same algorithm, a characteristic function

u(S) ≡ tv(S) + (1 - t)w(S).

(It is easy to verify this relationship by inspection of the formulae which we quoted.) And this same u(S) was introduced as center of gravity of v(S) and w(S) by (27:10) in 27.6.3.
Thus the considerations of the text are in harmony with those of 27.6. That we are dealing with centers of gravity of more than two points (eight: I-VIII) instead of only two is not essential: the former operation can be obtained by iteration of the latter.
It follows from these remarks that the difficulties which are pointed out in the text below have a direct bearing on 27.6.3., as was indicated there.
DISCUSSION OF THE MAIN DIAGONALS
305
which consists in describing particular solutions in cases where such are
known. It would scarcely serve the purpose of this exposition to give a
precise account of the momentary state of these investigations 1 and it
would take up an excessive amount of space. We shall only give some
instances which, it is hoped, are reasonably illustrative.
36.1.2. We consider first conditions on the main diagonal I-Center-VIII in Q near its end at VIII, x1 = x2 = x3 = -1 (cf. 35.3.3.), and we shall try to extend over the x1 = x2 = x3 > -1 as far as possible. (Cf. Figure 63.)

Figure 63. The diagonal I-Center-VIII redrawn.
On this diagonal

(36:1)   v(S) =   -1    when S has 1 element,
                  2x1   when S has 2 elements and 4 belongs to S,
                 -2x1   when S has 2 elements and 4 does not belong to S,
                  1     when S has 3 elements,
                  0     when S has 4 elements.
(Observe that this gives (35:1) in 35.1.1. for x1 = 1 and (35:6) in 35.2.1. for x1 = -1.) We assume that x1 > -1 but not by too much; just how much excess is to be permitted will emerge later. Let us first consider this situation heuristically.
Since x1 is supposed to be not very far from -1, the discussion of 35.2. may still give some guidance. A coalition of two players from among
1 This will be done by one of us in subsequent mathematical publications.
1,2,3 may even now be the most important strategic aim, but it is no longer the only one: the formula (35:7) of 35.2.1. is not true, but instead

(36:2)   v(S ∪ T) > v(S) + v(T)   if S ∩ T = ∅,

when T = (4).1 Indeed, it is easily verified from (36:1) that this excess2 is always 2(1 + x1). For x1 = -1 this vanishes, but we have x1 slightly > -1, so the expression is slightly > 0. Observe that for the preceding coalition of two players other than player 4, the excess in (36:2)3 is by (36:1) always 2(1 - x1). For x1 = -1 this is 4, and as we have x1 slightly > -1, it will be only slightly < 4.
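Both excesses can be verified directly from (36:1); the sketch below (ours, not part of the original text) checks the two identities for several values of x1 with exact rational arithmetic.

```python
from fractions import Fraction

# v(S) on the diagonal x1 = x2 = x3, per (36:1).
def v(S, x1):
    n = len(S)
    if n in (0, 4):
        return 0
    if n == 1:
        return -1
    if n == 3:
        return 1
    return 2 * x1 if 4 in S else -2 * x1

# Excess appearing in (36:2): v(S | T) - v(S) - v(T) for disjoint S, T.
def excess(S, T, x1):
    return v(S | T, x1) - v(S, x1) - v(T, x1)

for x1 in (Fraction(-1), Fraction(-9, 10), Fraction(-1, 2)):
    # Player 4 joining a coalition: excess 2(1 + x1), vanishing at x1 = -1.
    assert excess({1}, {4}, x1) == excess({1, 2}, {4}, x1) == 2 * (1 + x1)
    # Two players other than 4 combining: excess 2(1 - x1), i.e. near 4.
    assert excess({1}, {2}, x1) == 2 * (1 - x1)
```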
Thus the first coalition (between two players, other than player 4),
is of a much stronger texture than any other (where player 4 enters into the
picture), but the latter cannot be disregarded nevertheless. Since the
first coalition is the stronger one, it may be suspected that it will form first
and that once it is formed it will act as one player in its dealings with the
two others. Then some kind of a three-person game may be expected to
take place for the final crystallization.
36.1.3. Taking, e.g. (1,2) for this "first" coalition, the surmised three-person game is between the players (1,2), 3, 4.4 In this game the a, b, c of 23.1. are a = v((3,4)) = 2x1, b = v((1,2,4)) = 1, c = v((1,2,3)) = 1.5 Hence, if we may apply the results obtained there (all of this is extremely heuristic!) the player (1,2) gets the amount α = (-a + b + c)/2 = 1 - x1 if successful (in joining the last coalition), and -a = -2x1 if defeated. The player 3 gets the amount β = (a - b + c)/2 = x1 if successful, and -b = -1 if defeated. The player 4 gets the amount γ = (a + b - c)/2 = x1 if successful, and -c = -1 if defeated.
Since "first" coalitions (1,3), (2,3) may form just as well as (1,2), there are the same heuristic reasons as in the first discussion of the three-person game (in 21., 22.) to expect that the partners of these coalitions will split even. Thus, when such a coalition is successful (cf. above), its members may be expected to get (1 - x1)/2 each, and when it is defeated, -x1 each.
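The amounts α, β, γ borrowed from 23.1. can be recomputed mechanically; the following sketch (ours, not part of the original text) evaluates them from a = 2x1, b = c = 1 with exact rational arithmetic.

```python
from fractions import Fraction

# Surmised three-person game among (1,2), 3, 4, with the a, b, c of 23.1.:
# a = v((3,4)) = 2*x1, b = v((1,2,4)) = 1, c = v((1,2,3)) = 1.
def amounts(x1):
    a, b, c = 2 * x1, 1, 1
    alpha = (-a + b + c) / 2    # composite player (1,2) when successful
    beta = (a - b + c) / 2      # player 3 when successful
    gamma = (a + b - c) / 2     # player 4 when successful
    return alpha, beta, gamma

x1 = Fraction(-9, 10)           # a sample value slightly above -1
alpha, beta, gamma = amounts(x1)
assert alpha == 1 - x1 and beta == gamma == x1
# The "first" coalition splits evenly: (1 - x1)/2 each when successful,
# and -x1 each when defeated (its defeat value being -a = -2*x1).
assert alpha / 2 == (1 - x1) / 2 and (-(-2 * x1)) / 2 == -(-x1)
```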
36.1.4. Summing up: If these surmises prove correct, the situation is as
follows:
1 Unless S = ∅ or T = ∅, in which case there is always = in (36:2). I.e. in the present situation S must have one or two elements.
2 By footnote 1 above, S has one or two elements and it does not contain 4.
3 I.e., now S, T are two one-element sets, not containing player 4.
4 One might say that (1,2) is a juridical person, while 3,4 are, in our picture, natural persons.
5 In all the formulae which follow, remember that x1 is near to -1, i.e. presumably negative; hence -x1 is a gain, and x1 is a loss.
If the "first" coalition is (1,2), and if it is successful in finding an ally, and if the player who joins it in the final coalition is player 3, then the players 1,2,3,4 get the amounts (1 - x1)/2, (1 - x1)/2, x1, -1 respectively. If the player who joins the final coalition is player 4, then these amounts are replaced by (1 - x1)/2, (1 - x1)/2, -1, x1. If the "first" coalition (1,2) is not successful, i.e. if the players 3,4 combine against it, then the players get the amounts -x1, -x1, x1, x1 respectively.
If the "first" coalition is (1,3) or (2,3), then the corresponding permutation of the players 1,2,3 must be applied to the above.
36.2. The Part Adjacent to the Corner VIII: Exact Discussion
36.2.1. It is now necessary to submit all this to an exact check. The heuristic suggestion manifestly corresponds to the following surmise:
Let V be the set of these imputations:

(36:3)   α' = {(1 - x1)/2, (1 - x1)/2, x1, -1},
         α'' = {(1 - x1)/2, (1 - x1)/2, -1, x1},
         α''' = {-x1, -x1, x1, x1},

and the imputations which originate from these by permuting the players (i.e. the components) 1,2,3.
(Cf. footnote 5, p. 306.) We expect that this V is a solution in the rigorous sense of 30.1. if x1 is near to -1, and we must determine whether this is so, and precisely in what interval of the x1.
This determination, if carried out, yields the following result:

(36:A)   The set V of (36:3) is a solution if and only if
         -1 ≦ x1 ≦ -1/5.

This then is the answer to the question how far (from the starting point x1 = -1, i.e. the corner VIII) the above heuristic consideration guides to a correct result.1
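Parts of (36:A) can be checked mechanically. The sketch below (ours, and of course no substitute for the exact proof of 36.2.2.-36.2.3.) verifies that the vectors of (36:3) are imputations throughout -1 ≦ x1 ≦ -1/5, and that the effectivity condition which the proof isolates as (36:7), namely (1 + x1)/2 ≦ -2x1, holds exactly for x1 ≦ -1/5.

```python
from fractions import Fraction

# The imputations of (36:3) as functions of x1 (components = players 1,2,3,4).
def V(x1):
    h = (1 - x1) / 2
    return [(h, h, x1, -1), (h, h, -1, x1), (-x1, -x1, x1, x1)]

# Throughout -1 <= x1 <= -1/5 every vector of V is an imputation:
for num in range(-10, -1):               # x1 = -1, -9/10, ..., -1/5
    x1 = Fraction(num, 10)
    for a in V(x1):
        assert sum(a) == 0               # zero-sum
        assert all(c >= -1 for c in a)   # each component >= v((i)) = -1

# The effectivity condition (36:7): alpha'_1 + alpha'_3 <= v((1,3)) = -2*x1,
# i.e. (1 + x1)/2 <= -2*x1, holds precisely for x1 <= -1/5.
for num in range(-10, 11):
    x1 = Fraction(num, 10)
    assert ((1 + x1) / 2 <= -2 * x1) == (x1 <= Fraction(-1, 5))
```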
36.2.2. The proof of (36:A) can be carried out rigorously without any
significant technical difficulty. It consists of a rather mechanical disposal
1 We wish to emphasize that (36:A) does not assert that V is (in the specified range of x1) the only solution of the game in question. However, attempts with numerous similarly built sets failed to disclose further solutions for x1 ≦ -1/5 (i.e. in the range of (36:A)). For x1 slightly > -1/5 (i.e. slightly outside the range of (36:A)), where the V of (36:A) is no longer a solution, the same is true for the solution which replaces it. Cf. (36:B) in 36.3.1.
We do not question, of course, that other solutions of the "discriminatory" type, as repeatedly discussed before, always exist. But they are fundamentally different from the finite solutions V which are now under consideration.
These are the arguments which seem to justify our view that some qualitative change in the nature of the solutions occurs at x1 = -1/5 (on the diagonal I-Center-VIII).
of a series of special cases, and does not contribute anything to the clarification of any question of principle.1
it if he feels so disposed, without losing the connection with the main course
of the exposition. He should remember only the statement of the results
in (36:A).
Nevertheless we give the proof in full for the following reason: The set
V of (36:3) was found by heuristic considerations, i.e. without using the
exact theory of 30.1. at all. The rigorous proof to be given is based on
30.1. alone, and thereby brings us back to the only ultimately satisfactory
standpoint, that of the exact theory. The heuristic considerations were
only a device to guess the solution, for want of any better technique; and
it is a fortunate feature of the exact theory that its solutions can occasionally
be guessed in this way. But such a guess must afterwards be justified
by the exact method, or rather that method must be used to determine in
what domain (of the parameters involved) the guess was admissible.
We give the exact proof in order to enable the reader to contrast and
to compare explicitly these two procedures, the heuristic and the rigorous.
36.2.3. The proof is as follows:
If x1 = -1, then we are in the corner VIII, and the V of (36:3) coincides with the set which we introduced heuristically (as a solution) in 35.2.3., and which can easily be justified rigorously (cf. also footnote 2 on p. 299). Therefore we disregard this case now, and assume that

(36:4)   x1 > -1.
We must first establish which sets S ⊆ I = (1,2,3,4) are certainly necessary or certainly unnecessary in the sense of 31.1.2., since we are
carrying out a proof which is precisely of the type considered there.
The following observations are immediate:
(36:5)   By virtue of (31:H) in 31.1.5., three-element sets S are certainly necessary, two-element sets are dubious, and all other sets are certainly unnecessary.2
(36:6) Whenever a twoelement set turns out to be certainly neces
sary, we may disregard all those threeelement sets of which it is
a subset, owing to (31 :C) in 31.1.3.
Consequently we shall now examine the two-element sets. This of course must be done for all the α in the set V of (36:3).
1 The reader may contrast this proof with some given in connection with the theory of the zero-sum two-person game, e.g. the combination of 16.4. with 17.6. Such a proof is more transparent, it usually covers more ground, and gives some qualitative elucidation of the subject and its relation to other parts of mathematics. In some later parts of this theory such proofs have been found, e.g. in 46. But much of it is still in the primitive and technically unsatisfactory state of which the considerations which follow are typical.
2 This is due to n = 4.
DISCUSSION OF THE MAIN DIAGONALS 309
Consider first those two-element sets S which occur in conjunction with α'. As α'4 = -1 we may exclude by (31:A) in 31.1.3. the possibility that S contains 4. S = (1,2) would be effective if α'1 + α'2 ≤ v((1,2)), i.e. 1 - x1 ≤ -2x1, x1 ≤ -1, which is not the case by (36:4). S = (1,3) is effective if α'1 + α'3 ≤ v((1,3)), i.e. (1 + x1)/2 ≤ -2x1, x1 ≤ -1/5. Thus the condition
(36:7) x1 ≤ -1/5,
which we assume to be satisfied, makes its first appearance. S = (2,3) we do not need, since 1 and 2 play the same role in α' (cf. footnote 1 above).
We now pass to α''. As α''3 = -1 we now exclude the S which contains 3 (cf. above). S = (1,2) is disposed of as before, since α' and α'' agree in these components. S = (1,4) would be effective if α''1 + α''4 ≤ v((1,4)), i.e. (1 + x1)/2 ≤ 2x1, x1 ≥ 1/3, which, by (36:7), is not the case. S = (2,4) is discarded in the same way.
Finally we take α'''. S = (1,2) is effective: α'''1 + α'''2 = v((1,2)), i.e. -2x1 = -2x1. S = (1,3) need not be considered for the following reason: We are already considering S = (1,2) for α'''; if we interchange 2 and 3 (cf. footnote 1 above) this goes over into (1,3), with the components -x1, -x1. Our original S = (1,3) for α''', with the components -x1, x1, is thus rendered unnecessary by (31:B) in 31.1.3., as x1 ≤ -x1 owing to (36:7). S = (2,3) is discarded in the same way. S = (1,4) would be effective if α'''1 + α'''4 ≤ v((1,4)), i.e. 0 ≤ 2x1, x1 ≥ 0, which, by (36:7), is not the case. S = (2,4) is discarded in the same way. S = (3,4) is effective: α'''3 + α'''4 = v((3,4)), i.e. 2x1 = 2x1.
Summing up:
(36:8) Among the two-element sets S the three given below are certainly necessary, and all others are certainly unnecessary:
(1,3) for α', (1,2) and (3,4) for α'''.
Concerning three-element sets S: By (31:A) in 31.1.3. we may exclude those containing 4 for α' and 3 for α''. Consequently only (1,2,3) is left for α' and (1,2,4) for α''. Of these the former is excluded by (36:6), as it contains the set (1,3) of (36:8). For α''' every three-element set
1 Here, and in the entire discussion which follows, we shall make use of the freedom to
apply permutations of 1,2,3 as stated in (36:3), in order to abbreviate the argumentation.
Hence the reader must afterwards apply these permutations of 1,2,3 to our results.
contains the set (1,2) or the set (3,4) of (36:8); hence we may exclude it by
(36:6).
Summing up:
(36:9) Among the three-element sets S, the one given below is certainly necessary, and all others are certainly unnecessary:¹
(1,2,4) for α''.
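The tabulation (36:8) can also be re-derived mechanically. The sketch below is our own illustration, under the same assumed reconstruction of the characteristic function: it lists, for each of α', α'', α''', the two-element sets that are effective in the sense of 30.1. and not excluded by (31:A); the remaining exclusions made in the text — (2,3) for α' by the symmetry of players 1 and 2, and (1,3), (2,3) for α''' by (31:B) — must still be applied by hand.

```python
from fractions import Fraction
from itertools import combinations

def v(S, x1):
    # two-element sets on the diagonal: v((i,4)) = 2*x1, v((i,j)) = -2*x1 (assumed reconstruction)
    return 2 * x1 if 4 in S else -2 * x1

def usable_pairs(alpha, x1):
    out = []
    for S in combinations((1, 2, 3, 4), 2):
        effective = alpha[S[0] - 1] + alpha[S[1] - 1] <= v(S, x1)
        # (31:A): a set containing a player whose component is at the minimum -1 is certainly unnecessary
        blocked = any(alpha[i - 1] == -1 for i in S)
        if effective and not blocked:
            out.append(S)
    return out

x1 = Fraction(-2, 5)            # inside the domain (36:A), i.e. x1 <= -1/5
a1 = ((1 - x1) / 2, (1 - x1) / 2, x1, Fraction(-1))   # alpha'
a2 = ((1 - x1) / 2, (1 - x1) / 2, Fraction(-1), x1)   # alpha''
a3 = (-x1, -x1, x1, x1)                               # alpha'''
```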
36.2.4. We now verify (30:5:a) in 30.1.1., i.e. that no α of V dominates any element of V.
α = α': By (36:8), (36:9) we must use S = (1,3). Can α' dominate with this S any 1,2,3 permutation of α' or α'' or α'''? This requires first the existence of a component < x1 (this is the 3 component of α') among the 1,2,3 components of the imputation in question. Thus α' and α''' are excluded.² Even in α'' the 1,2 components are excluded (cf. footnote 2), but the 3 component will do. But now another one of the 1,2,3 components of this imputation α'' must be < (1 - x1)/2 (this is the 1 component of α'), and this is not the case; the 1,2 components of α'' are both = (1 - x1)/2.
α = α'': By (36:8), (36:9) we must use S = (1,2,4). Can α'' dominate with this S any 1,2,3 permutation of α' or α'' or α'''? This requires first that the 4 component of the imputation in question be < x1 (this is the 4 component of α''). Thus α'' and α''' are excluded. For α' we must require further that two of its 1,2,3 components be < (1 - x1)/2 (this is the 1 as well as the 2 component of α''), and this is not the case; only one of these components is ≠ (1 - x1)/2.
"a* = 1?'": By (36:8), (36:9) we must use S = (1,2) and then S = (3,4).
5 = (1,2) : Can a'" dominate with this as described above? This requires
the existence of two components < zi (this is the 1 as well as the 2 com
~"*
ponent of a'") among the 1,2,3 components of the imputation in question.
This is not the case for a'", as only one of these components is j& x\
1 As every threeelement set is certainly necessary by (36:5) above, this is another
instance of the phenomenon mentioned at the end of footnote 1 on p. 274.
1 Indeed 5^ a?i, i.e. x\ rand x\ x\, i.e. Xi both by (36:7).
V
there. Nor is it the case for α' or α'', as only one of those components is ≠ (1 - x1)/2 there.¹ S = (3,4): Can α''' dominate with this S as described above? This requires first that the 4 component of the imputation in question be < x1 (this is the 4 component of α'''). Thus α'' and α''' are excluded. For α' we must require further the existence of a component < x1 (this is the 3 component of α''') among its 1,2,3 components, and this is not the case; all these components are ≥ x1 (cf. footnote 2 on p. 310).
This completes the verification of (30:5:a).
36.2.5. We verify next (30:5:b) in 30.1.1., i.e. that an imputation β which is undominated by the elements of V must belong to V.
Consider a β undominated by the elements of V. Assume first that β4 < x1. If any one of β1, β2, β3 were < x1, we could make (by permuting 1,2,3) β3 < x1. This gives α''' ≻ β with S = (3,4) of (36:8). Hence
β1, β2, β3 ≥ x1.
If any two of β1, β2, β3 were < (1 - x1)/2, we could make (by permuting 1,2,3) β1, β2 < (1 - x1)/2. This gives α'' ≻ β with S = (1,2,4) of (36:9). Hence at most one of β1, β2, β3 is < (1 - x1)/2, i.e. two are ≥ (1 - x1)/2. By permuting 1,2,3, we can thus make
β1, β2 ≥ (1 - x1)/2.
Clearly β4 ≥ -1. Thus each component of β is ≥ the corresponding component of α', and since both are imputations² it follows that they coincide: β = α', and so it is in V.
Assume next that β4 ≥ x1. If any two of β1, β2, β3 were < -x1, we could make (by permuting 1,2,3) β1, β2 < -x1. This gives α''' ≻ β with S = (1,2) of (36:8). Hence, at most one of β1, β2, β3 is < -x1, i.e. two are ≥ -x1. By permuting 1,2,3 we can make
β1, β2 ≥ -x1.
If β3 ≥ x1, then all this implies that each component of β is ≥ the corresponding component of α''', and since both are imputations (cf. footnote 2 on p. 311) it follows that they coincide: β = α''', and so it is in V.
1 And (1 - x1)/2 ≥ -x1, i.e. x1 ≥ -1.
2 Consequently for both the sum of all components is the same: zero.
Assume therefore that β3 < x1. If any one of β1, β2 were < (1 - x1)/2, we could make (by permuting 1,2) β1 < (1 - x1)/2. This gives α' ≻ β with S = (1,3) of (36:8). Hence
β1, β2 ≥ (1 - x1)/2.
Clearly β3 ≥ -1. Thus each component of β is ≥ the corresponding component of α'', and since both are imputations (cf. footnote 2, p. 311), it follows that they coincide: β = α'', and so it is in V.
This completes the verification of (30:5:b).¹
So we have established the criterion (36:A).²
36.3. Other Parts of the Main Diagonals
36.3.1. When x1 passes outside the domain (36:A) of 36.2.1., i.e. when it crosses its border at x1 = -1/5, then the V of (36:3) id. ceases to be a solution. It is actually possible to find a solution which is valid for a certain domain in x1 > -1/5 (adjoining x1 = -1/5), which obtains by adding to the V of (36:3) the further imputations
(36:10) α^IV = {(1 - x1)/2, -x1, -(1 - x1)/2, x1} and permutations as in (36:3).³
The exact statement is actually this:
(36:B) The set V of (36:3) and (36:10) is a solution if and only if -1/5 < x1 ≤ 0.⁴
1 The reader will observe that in the course of this analysis all sets of (36:8), (36:9) have been used for dominations, and had to be equated successively to all three α', α'', α''' of (36:3).
2 Concerning x1 = -1, cf. the remarks made at the beginning of this proof.
3 An inspection of the above proof shows that when x1 becomes > -1/5, this goes wrong: The set S = (1,3) (and with it (2,3)) is no longer effective for α'. Of course this rehabilitates the three-element set (1,2,3) which was excluded solely because (1,3) (and (2,3)) is contained in it.
Thus domination by this element of V, α', now becomes more difficult, and it is therefore not surprising that an increase of the set V must be considered in the search for a solution.
4 Observe the discontinuity at x1 = -1/5, which belongs to (36:A) and not to (36:B)! The exact theory is quite unambiguous, even in such matters.
THE CENTER AND ITS ENVIRONS 313
The proof of (36:B) is of the same type as that of (36:A) given above, and we do not propose to discuss it here.
The domains (36:A) and (36:B) exhaust the part x1 ≤ 0 of the entire available interval -1 ≤ x1 ≤ 1, i.e. the half VIII-Center of the diagonal VIII-Center-I.
36.3.2. Solutions of a nature similar to the V described in (36:A) of 36.2.1. and in (36:B) of 36.3.1. have been found on the other side x1 > 0, i.e. the half Center-I of the diagonal, as well. It turns out that on this half, qualitative changes occur of the same sort as in the first half covered by (36:A) and (36:B). Actually three such intervals exist, namely:
(36:C) 0 ≤ x1 < 1/4,
(36:D) 1/4 < x1 ≤ 1/2,
(36:E) 1/2 ≤ x1 ≤ 1.
(Cf. Figure 64, which is to be compared with Figure 63.)
Figure 64. [A diagram of the diagonal VIII-Center-I, marking the division of the interval -1 ≤ x1 ≤ 1 into the domains discussed above.]
We shall not discuss the solutions pertaining to (36:C), (36:D), (36:E).¹
The reader may however observe this: x1 = 0 appears as belonging to both (neighboring) domains (36:B) and (36:C), and similarly x1 = 1/2 to both domains (36:D) and (36:E). This is so because, as a close inspection of the corresponding solutions V shows, while qualitative changes in the nature of V occur at x1 = 0 and 1/2, these changes are not discontinuous.
The point x1 = 1/4, on the other hand, belongs to neither neighboring domain (36:C) nor (36:D). It turns out that the types of solutions V which are valid in these two domains are both unusable at x1 = 1/4. Indeed, the conditions at this point have not been sufficiently clarified thus far.
37. The Center and Its Environs
37.1. First Orientation Concerning the Conditions around the Center
37.1.1. The considerations of the last section were restricted to a one-dimensional subset of the cube Q: The diagonal VIII-Center-I. By using
the permutations of the players 1,2,3,4, as described in 34.3., this can be
made to dispose of all four main diagonals of Q. By techniques that are
similar to those of the last section, solutions can also be found along some other one-dimensional lines in Q. Thus there is quite an extensive net of
lines in Q on which solutions are known. We do not propose to enumerate
them, particularly because the information that is available now corresponds
probably to only a transient state of affairs.
1 Another family of solutions, which also cover part of the same territory, will be
discussed in 38.2. Cf. in particular 38.2.7., and footnote 2 on p. 328.
This, however, should be said: such a search for solutions along isolated one-dimensional lines, when the whole three-dimensional body of the cube Q waits for elucidation, cannot be more than a first approach to the problem.
If we can find a three-dimensional part of the cube (even if it is a small one) for all points of which the same qualitative type of solutions can be used, we shall have some idea of the conditions which are to be expected. Now such a three-dimensional part exists around the center of Q. For this
reason we shall discuss the conditions at the center.
37.1.2. The center corresponds to the values 0,0,0 of the coordinates x1, x2, x3 and represents, as pointed out in 35.3.1., the only (totally) symmetric game in our set-up. The characteristic function of this game is:
(37:1) v(S) = -1 when S has 1 element,
        0 when S has 2 elements,
        1 when S has 3 elements,
        0 when S has 0 or 4 elements.
(Cf. (35:8) id.) As in the corresponding cases in 35.1., 35.2., 36.1., we begin again with a heuristic analysis.
This game is obviously one in which the purpose of all strategic efforts is to form a three-person coalition. A player who is left alone is clearly a
loser, any coalition of 3 in the same sense a winner, and if the game should
terminate with two coalitions of two players each facing each other, then
this must obviously be interpreted as a tie.
The qualitative question which arises here is this: The aim in this
game is to form a coalition of three. It is probable that in the negotiations
which precede the play a coalition of two will be formed first. This coalition
will then negotiate with the two remaining players, trying to secure the
cooperation of one of them against the other. In securing the adherence
of this third player, it seems questionable whether he will be admitted into
the final coalition on the same conditions as the two original members.
If the answer is affirmative, then the total proceeds of the final coalition, 1, will be divided equally among the three participants: 1/3, 1/3, 1/3. If it is negative, then the two original members (belonging to the first coalition of two) will probably both get the same amount, but more than 1/3. Thus 1 will be divided somewhat like this: 1/3 + ε, 1/3 + ε, 1/3 - 2ε, with an ε > 0.
37.1.3. The first alternative would be similar to the one which we encountered in the analysis of the point I in 35.1. Here the coalition (1,2,3), if it forms at all, contains its three participants on equal terms. The second alternative corresponds to the situation in the interval analyzed in 36.1.2. Here any two players (neither of them being player 4) combined first, and this coalition then admitted either one of the two remaining players on less favorable terms.
37.1.4. The present situation is not a perfect analogue of either of these.
In the first case the coalition (1,2) could not make stiff terms to player 3
because they absolutely needed him: if 3 combined with 4, then 1 and 2
would be completely defeated; and (1,2) could not, as a coalition, combine
with 4 against 3, since 4 needed only one of them to be victorious (cf. the
description in 35.1.3.). In our present game this is not so: the coalition
(1,2) can use 3 as well as 4, and even if 3 and 4 combine against it, only a
tie results.
In the second case the discrimination against the member who joins the coalition of three participants last is plausible, since the original coalition of two is of a much stronger texture than the final coalition of three. Indeed, as x1 tends to -1, the latter coalition tends to become worthless; cf. the remarks at the end of 36.1.2.
difference can be recognized: the first coalition (of two) accounts for the
difference between defeat and tie, while formation of the final coalition (of
three) decides between tie and victory.
We have no satisfactory basis for a decision except to try both alternatives. Before we do this, however, an important limitation of our considerations deserves attention.
37.2. The Two Alternatives and the Role of Symmetry
37.2.1. It will be noted that we assume that the same one of the two
alternatives above holds for all four coalitions of three players. Indeed, we
are now looking for symmetric solutions only, i.e. solutions which contain, along with an imputation α = {α1, α2, α3, α4}, all its permutations.
Now a symmetry of the game by no means implies in general the
corresponding symmetry in each one of its solutions. The discriminatory
solutions discussed in 33.1.1. make this clear already for the three-person game. We shall find in 37.6. further instances of this for the symmetric four-person game now under consideration.
It must be expected, however, that asymmetric solutions for a symmetric
game are of too recondite a character to be discovered by a first heuristic
survey like the present one. (Cf. the analogous occurrence in the three-person game, referred to above.) This then is our excuse for looking at
present only for symmetric solutions.
37.2.2. One more thing ought to be said: it is not inconceivable that, while asymmetric solutions exist, general organizational principles, like those corresponding to our above two alternatives, are valid either for the totality of all participants or not at all. This surmise gains some strength from the consideration that the number of participants is still very low, and may actually be too low to permit the formation of several groups of participants with different principles of organization. Indeed, we have only
four participants, and ample evidence that three is the minimum number
for any kind of organization. These somewhat vague considerations will
find exact corroboration in at least one special instance in (43 :L) et seq. of
43.4.2. For the present case, however, we are not able to support them by
any rigorous proof.
37.3. The First Alternative at the Center
37.3.1. Let us now consider the two alternatives of 37.1.2. We take
them up in reverse order.
Assume first that the two original participants admit the third one under much less favorable conditions. Then the first coalition (of two) must be considered as the core on which the final coalition (of three) crystallizes. In this last phase the first coalition must therefore be expected to act as one player in its dealings with the two others, thus bringing about something like a three-person game. If this view is sound, then we may repeat the corresponding considerations of 36.1.3.
Taking, e.g. (1,2), as the "first" coalition, the surmised three-person game is between the players (1,2), 3, 4. The considerations referred to above therefore apply literally, only with changed numerical values: a = 0, b = c = 1, and so α = 1, β = γ = 0.¹
Since the "first" coalition may consist of any two players, there are heuristic reasons similar to those in the discussion of the three-person game (in 21.-22.) to expect that the partners in it will split even: when an ally is found, as well as when a tie results, the amount to be divided being 1 or 0 respectively.²
37.3.2. Summing up: if the above surmises prove correct, the situation is
as follows:
If the "first" coalition is (1,2), and if it is successful in finding an ally, and if the player who joins it in the final coalition is 3, then the players 1,2,3,4 get the amounts 1/2, 1/2, 0, -1 respectively. If the "first" coalition is not successful, i.e. if a tie results, then these amounts are replaced by 0,0,0,0.
If the distribution of the players is different, then the corresponding permutation of the players 1,2,3,4 must be applied to the above.
It is now necessary to submit all this to an exact check. The heuristic suggestion manifestly corresponds to the following surmise:
Let V be the set of these following imputations:
(37:2) α' = {1/2, 1/2, 0, -1}
       α'' = {0, 0, 0, 0}
and the imputations which originate from these by permuting the players (i.e. the components) 1,2,3,4.
We expect that this V is a solution.
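This surmise can be given a quick mechanical spot-check. The sketch below is our own addition, not part of the original text: it builds the 13 imputations of (37:2) for the center game (37:1), confirms that none dominates another, and confirms that two sample imputations outside the set — among them the α''' and the α^IV of the second alternative treated in 37.4. — are dominated by it.

```python
from fractions import Fraction as F
from itertools import combinations, permutations

def v(S):
    # (37:1): the center game; v depends only on the number of elements of S
    return {0: F(0), 1: F(-1), 2: F(0), 3: F(1), 4: F(0)}[len(S)]

def dominates(a, b):
    # a dominates b via some nonempty effective S (30.1.)
    for r in (1, 2, 3):
        for S in combinations((1, 2, 3, 4), r):
            if sum(a[i - 1] for i in S) <= v(S) and all(a[i - 1] > b[i - 1] for i in S):
                return True
    return False

alpha1 = (F(1, 2), F(1, 2), F(0), F(-1))
V = set(permutations(alpha1)) | {(F(0),) * 4}      # the set (37:2): 12 permutations plus the null imputation

def dominated_by_V(b):
    return any(dominates(a, b) for a in V)
```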
1 The essential difference between this discussion and that referred to is that player 4 is no longer excluded from the "first" coalition.
2 The argument in this case is considerably weaker than in the case referred to (or in the corresponding application in 36.1.3.) since every "first" coalition may now wind up in two different ways (tie or victory). The only satisfactory decision as to the value of the argumentation obtains when the exact theory is applied. The desired justification is actually contained in the proof of 38.2.1.-3.; indeed, it is the special case
y1 = y2 = y3 = y4 = 1
of (38:D) in 38.2.3.
A rigorous consideration, of the same type as that which constitutes
36.2., shows that this V is indeed a solution in the sense of 30.1. We do
not give it here, particularly because it is contained in a more general proof
which will be given later. (Cf. the reference of footnote 2 on p. 316.)
37.4. The Second Alternative at the Center
37.4.1. Assume next that the final coalition of three contains all its participants on equal terms. Then if this coalition is, say (1,2,3), the players 1,2,3,4 get the amounts 1/3, 1/3, 1/3, -1 respectively.
It would be rash to conclude from this that we expect the set of imputations V to which this leads to be a solution; i.e. the set of these imputations α = {α1, α2, α3, α4}:
(37:3) α''' = {1/3, 1/3, 1/3, -1} and permutations as in (37:2).
We have made no attempt as yet to understand how this formation of the final coalition in "one piece" comes about, without assuming the previous existence of a favored two-person core.
37.4.2. In the previous solution of (37:2) such an explanation is discernible. The stratified form of the final coalition is expressed by the imputation α', and the motive for just this scheme of distribution lies in the threat of a tie, expressed by the imputation α''. To put it exactly: the α' form a solution only in conjunction with the α'', and not by themselves.
In (37:3) this second element is lacking. A direct check in the sense of 30.1. discloses that the α''' fulfill condition (30:5:a) there, but not (30:5:b). I.e. they do not dominate each other, but they leave certain other imputations undominated. Hence further elements must be added to V.¹
This addition can certainly not be the α'' = {0,0,0,0} of (37:2) since that imputation happens to be dominated by α'''.² In other words the extension (i.e. stabilization, in the sense of 4.3.3.) of α''' to a solution must be achieved by entirely different imputations (i.e. threats) in the case of the α''' of (37:3) than in the case of the α' of (37:2).
It seems very difficult to find a heuristic motivation for the steps which are now necessary. Luckily, however, a rigorous procedure is possible from here on, thus rendering further heuristic considerations unnecessary.
1 To avoid misunderstandings: It is by no means generally true that any set of imputations which do not dominate each other can be extended to a solution. Indeed, the problem of recognizing a given set of imputations as being a subset of some (unknown) solution is still unsolved. Cf. 30.3.7.
In the present case we are just expressing the hope that such an extension will prove possible for the V of (37:3), and this hope will be further justified below.
2 With S = (1,2,3).
Indeed, one can prove rigorously that there exists one and only one symmetric extension of the V of (37:3) to a solution. This is the addition of these imputations α = {α1, α2, α3, α4}:
(37:4) α^IV = {1/3, 1/3, -1/3, -1/3} and permutations as in (37:2).
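Two of the rigorous claims above can be spot-checked mechanically: that the enlarged set satisfies (30:5:a), and that α''' dominates {0,0,0,0} with S = (1,2,3), as stated in 37.4.2. The sketch below is our own illustration, using the values of (37:3) and (37:4) as read here.

```python
from fractions import Fraction as F
from itertools import combinations, permutations

def v(S):
    # (37:1): the center game; v depends only on the number of elements of S
    return {0: F(0), 1: F(-1), 2: F(0), 3: F(1), 4: F(0)}[len(S)]

def dominates(a, b):
    for r in (1, 2, 3):
        for S in combinations((1, 2, 3, 4), r):
            if sum(a[i - 1] for i in S) <= v(S) and all(a[i - 1] > b[i - 1] for i in S):
                return True
    return False

a3 = (F(1, 3), F(1, 3), F(1, 3), F(-1))       # alpha''' of (37:3)
a4 = (F(1, 3), F(1, 3), F(-1, 3), F(-1, 3))   # alpha^IV of (37:4)
W = set(permutations(a3)) | set(permutations(a4))   # 4 + 6 = 10 imputations
zeros = (F(0),) * 4
```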
37.4.3. If a common-sense interpretation of this solution, i.e. of its constituent α^IV of (37:4), is wanted, it must be said that it does not seem to be a tie at all (like the corresponding α'' of (37:2)); rather, it seems to be some kind of compromise between a part (two members) of a possible victorious coalition and the other two players. However, as stated above, we do not attempt to find a full heuristic interpretation for the V of (37:3) and (37:4); indeed it may well be that this part of the exact theory is already beyond such possibilities.¹ Besides, some subsequent examples will illustrate the peculiarities of this solution on a much wider basis. Again we refrain from giving the exact proof referred to above.
37.5. Comparison of the Two Central Solutions
37.5.1. The two solutions (37:2) and (37:3), (37:4), which we found for the game representing the center, present a new instance of a possible multiplicity of solutions. Of course we had observed this phenomenon before, namely in the case of the essential three-person game in 33.1.1. But there all solutions but one were in some way abnormal (we described this by terming them "discriminatory"). Only one solution in that case was a finite set of imputations; that solution alone possessed the same symmetry as the game itself (i.e. was symmetric with respect to all players). This time conditions are quite different. We have found two solutions which are both finite sets of imputations,² and which possess the full symmetry of the game. The discussion of 37.1.2. shows that it is difficult to consider either solution as "abnormal" or "discriminatory" in any sense; they are distinguished essentially by the way in which the accession of the last participant to the coalition of three is treated, and therefore seem to correspond to two perfectly normal principles of social organization.
37.5.2. If anything, the solution (37:3), (37:4) may seem the less normal one. Both in (37:2) and in (37:3), (37:4) the character of the solution was determined by those imputations which described a complete decision, α' and α''' respectively. To these the extra "stabilizing" imputations, α'' and α^IV, had to be added. Now in the first solution this extra α'' had an
1 This is, of course, a well known occurrence in mathematical-physical theories, even if they originate in heuristic considerations.
2 An easy count of the imputations given and of their different permutations shows that the solution (37:2) consists of 13 elements, and the solution (37:3), (37:4) of 10.
obvious heuristic interpretation as a tie, while in the second solution the nature of the extra α^IV appeared to be more complex.
A more thorough analysis discloses, however, that the first solution
is surrounded by some peculiar phenomena which can neither be explained
nor foreseen by the heuristic procedure which provided easy access to this
solution.
These phenomena are quite instructive from a general point of view too,
because they illustrate in a rather striking way some possibilities and
interpretations of our theory. We shall therefore analyze them in some
detail in what follows. We add that a similar expansion of the second
solution has not been found up to now.
37.6. Unsymmetrical Central Solutions
37.6.1. To begin with, there exist some finite but asymmetrical solutions which are closely related to (37:2) in 37.3.2. because they contain some of the imputations {1/2, 1/2, 0, -1}.¹ One of these solutions is the one which obtains when we approach the center along the diagonal I-Center-VIII from either side, and use there the solutions referred to in 36.3. I.e.: it obtains by continuous fit to the domains (36:B) and (36:C) there mentioned. (It will be remembered that the point x1 = 0, i.e. the center, belongs to both these domains, cf. 36.3.2.) Since this solution can be taken also to express a sui generis principle of social organization, we shall describe it briefly.
This solution possesses the same symmetry as those which belong to the games on the diagonal I-Center-VIII, as it is actually one of them: symmetric with respect to players 1,2,3, while player 4 occupies a special position.²
We shall therefore describe it in the same way we did the solutions on the diagonal, e.g. in (36:3) in 36.2.1. Here only permutations of the players 1,2,3 are suppressed, while in the descriptions of (37:3) and (37:4) we suppressed all permutations of the players 1,2,3,4.
37.6.2. For the sake of a better comparison, we restate with this notation (i.e. allowing for permutations of 1,2,3 only) the definition of our first fully symmetric solution (37:2) in 37.3.2. It consists of these imputations:³
β' = {1/2, 1/2, 0, -1}
β'' = {1/2, 1/2, -1, 0}
β''' = {1/2, 0, -1, 1/2}
β^IV = {0, 0, 0, 0}
and the imputations which originate from these by permuting the players 1,2,3.
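That this β-representation reproduces exactly the thirteen imputations of (37:2) can be confirmed mechanically. The sketch is our own addition:

```python
from fractions import Fraction as F
from itertools import permutations

H = F(1, 2)
betas = [(H, H, F(0), F(-1)),        # beta'
         (H, H, F(-1), F(0)),        # beta''
         (H, F(0), F(-1), H),        # beta'''
         (F(0), F(0), F(0), F(0))]   # beta^IV

# permutations of the players 1,2,3 only (player 4's component stays in place)
B = {tuple(b[i] for i in p) + (b[3],) for b in betas for p in permutations((0, 1, 2))}

# the set (37:2): all 1,2,3,4-permutations of alpha' plus the null imputation
A = set(permutations((H, H, F(0), F(-1)))) | {(F(0),) * 4}
```

The β', β'', β''' correspond to the three possible positions (-1, 0, 1/2) of player 4 within α', exactly as footnote 3 explains.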
1 I.e. some but not all of the 12 permutations of this imputation.
2 That the position of player 4 in the solution is really different from that of the others is what distinguishes this solution from the two symmetric ones mentioned before.
3 Our β', β'', β''' exhaust the α' of (37:2) in 37.3.2., while β^IV is α'', id. α' had to be represented by the three imputations β', β'', β''' because this system of representation makes it necessary to state in which one of the three possible positions of that imputation (i.e. the values 1/2, 0, -1) the player 4 is found.
Now the (asymmetric) solution to which we refer consists of these imputations:
(37:5) β', β'', β^IV as in 37.6.2., β^V = {1/2, 0, -1/2, 0}, and the imputations which originate from these by permuting the players 1,2,3.
Once more we omit giving the proof that (37:5) is a solution. Instead we shall suggest an interpretation of the difference between this solution and that of (37:2), i.e. of the first (symmetric) solution in 37.3.2.
37.6.3. This difference consists in replacing
β''' = {1/2, 0, -1, 1/2}
by
β^V = {1/2, 0, -1/2, 0}.
That is: the imputation β''', in which the player 4 would belong to the "first" coalition (cf. 37.3.1.), i.e. to the group which wins the maximum amount, is removed, and replaced by another imputation β^V. Player 4 now gets somewhat less and the losing player among 1,2,3 (in this arrangement player 3) gets somewhat more than in β'''. This difference is precisely 1/2, so that player 4 is reduced to the tie position 0, and player 3 moves from the completely defeated position -1 to an intermediate position -1/2.
Thus players 1,2,3 form a "privileged" group and no one from the outside will be admitted to the "first" coalition. But even among the three members of the privileged group the wrangle for coalition goes on, since the "first" coalition has room for two participants only. It is worth noting that a member of the privileged group may even be completely defeated, as in β'', but only by a majority of his "class" who form the "first" coalition and who may admit the "unprivileged" player 4 to the third membership of the "final" coalition, to which he is eligible.
37.6.4. The reader will note that this describes a perfectly possible form
of social organization. This form is discriminatory to be sure, although
not in the simple way of the "discriminatory" solutions of the threeperson
game. It describes a more complex and a more delicate type of social
interrelation, due to the solution rather than to the game itself.² One may think it somewhat arbitrary, but since we are considering a "society" of very small size, all possible standards of behavior must be adjusted rather precisely and delicately to the narrowness of its possibilities.
We scarcely need to elaborate the fact that similar discrimination against any other player (1,2,3 instead of 4) could be expressed by suitable
1 This imputation β^V is reminiscent in its arrangement of the α^IV in (37:4) of 37.4.2., but it has not been possible to make anything of that analogy.
2 As to this feature, cf. the discussion of 35.2.4.
NEIGHBORHOOD OF THE CENTER 321
solutions, which could then be associated with the three other diagonals
of the cube Q.
38. A Family of Solutions for a Neighborhood of the Center
38.1. Transformation of the Solution Belonging to the First Alternative at the Center
38.1.1. We continue the analysis of the ramifications of solution (37:2) in 37.3.2. It will appear that it can be subjected to a peculiar transformation without losing its character as a solution.
This transformation consists in multiplying the imputations (37:2) of 37.3.2. by a common (positive) numerical factor z. In this way the following set of imputations obtains:
(38:1) α' = {z/2, z/2, 0, -z}
       α'' = {0, 0, 0, 0}
and the imputations which originate from these by permuting the players 1,2,3,4.
In order that these be imputations, all their components must be ≥ -1 (i.e. the common value of the v((i))). As z > 0 this means only that z ≤ 1, i.e. we must have
(38:2) 0 < z ≤ 1.
For z = 1 our (38:1) coincides with (37:2) of 37.3.2. It would not seem likely a priori that (38:1) should be a solution for the same game for any other z of (38:2). And yet a simple discussion shows that it is a solution if and only if z > 2/3, i.e. when (38:2) is replaced by
(38:3) 2/3 < z ≤ 1.
The importance of this family of solutions is further increased by the fact that it can be extended to a certain three-dimensional piece surrounding the center of the cube Q. We shall give the necessary discussion in full, because it offers an opportunity to demonstrate a technique that may be of wider applicability in these investigations.
The interpretation of these results will be attempted afterwards.
38.1.2. We begin by observing that consideration of the set V defined
by the above (38:1) for the game described by (37:1) in 37.1.2. (i.e. the
center of Q), could be replaced by consideration of the original set V of
(37:2) in 37.3.2. in another game. Indeed, our (38:1) was obtained from
(37:2) by multiplying by z. Instead of this we could keep (37:2) and
multiply the characteristic function (37:1) by 1/z; this would destroy the
normalization γ = 1 which was necessary for the geometrical representation
by Q (cf. 34.2.2.) but we propose to accept that.
What we are now undertaking can be formulated therefore as follows:
So far we have started with a given game, and have looked for solutions.

322 ZERO-SUM FOUR-PERSON GAMES

Now we propose to reverse this process, starting with a solution and looking
for the game. Precisely: we start with a given set of imputations V, and
ask for which characteristic functions v(S) (i.e. games) this V is a solution.¹
Multiplication of the v(S) of (37:1) in 37.1.2. by a common factor means
that we still demand

(38:4)  v(S) = 0 when S is a two-element set,

but beyond this only the reduced character of the game (cf. 27.1.4.), i.e.

(38:5)  v((1)) = v((2)) = v((3)) = v((4)).

Indeed, this joint value of (38:5) is −1/z, and therefore (38:4), (38:5) and
(25:3:a), (25:3:b) in 25.3.1. yield that this v(S) is just (37:1) multiplied by
1/z. Our assertion (38:3) above means that the V of (37:2) in 37.3.2. is a
solution for (38:4), (38:5) if and only if the joint value of (38:5) (i.e. −1/z)
is ≤ −1 and > −3/2.
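This equivalence (rescaling the imputations by z versus rescaling the characteristic function by 1/z) can be tested mechanically. The sketch below is ours, not the text's; players are indexed 0–3 and the center game (37:1) is encoded directly (v = 0 on two-element sets, v((k)) = −1, hence v = 1 on three-element sets). It checks on random pairs that z·α dominates z·β under v exactly when α dominates β under (1/z)·v.

```python
import itertools, random

PLAYERS = range(4)

def v_center(S):
    # Characteristic function (37:1): the center of Q.
    return {0: 0.0, 1: -1.0, 2: 0.0, 3: 1.0, 4: 0.0}[len(S)]

def dominates(a, b, v):
    # a dominates b: some nonempty proper S, effective for a
    # (sum of a over S <= v(S)), with a_i > b_i for every i in S.
    for r in (1, 2, 3):
        for S in itertools.combinations(PLAYERS, r):
            if sum(a[i] for i in S) <= v(S) and all(a[i] > b[i] for i in S):
                return True
    return False

def random_imputation(v, rng):
    # components >= v((i)), summing to 0
    floors = [v((i,)) for i in PLAYERS]
    w = [rng.random() for _ in PLAYERS]
    t, excess = sum(w), -sum(floors)
    return [f + excess * x / t for f, x in zip(floors, w)]

z = 0.8
v_scaled = lambda S: v_center(S) / z
rng = random.Random(0)
for _ in range(300):
    a = random_imputation(v_scaled, rng)
    b = random_imputation(v_scaled, rng)
    assert dominates([z * x for x in a], [z * x for x in b], v_center) \
        == dominates(a, b, v_scaled)
```

Both effectivity and the componentwise comparison scale through the positive factor z, which is why the two domination relations agree case by case.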
38.1.3. Now we shall go one step further and drop the requirement of
reduction, i.e. (38:5). So we demand of v(S) only (38:4), restricting its
values for two-element sets S. We restate the final form of our question:

(38:A)  Consider all zero-sum four-person games where

(38:6)  v(S) = 0 for all two-element sets S.

For which among these is the set V of (37:2) in 37.3.2. a solution?

It will be noted that since we have dropped the requirements of normalization
and reduction of v(S), all connections with the geometrical representation
in Q are severed. A special manipulation will be necessary therefore
at the end, in order to put the results which we shall obtain back into the
framework of Q.
38.2. Exact Discussion
38.2.1. The unknowns of the problem (38:A) are clearly the values

(38:7)  v((1)) = y₁, v((2)) = y₂, v((3)) = y₃, v((4)) = y₄.

We propose to determine what restrictions the condition in (38:A) actually
places on these numbers y₁, y₂, y₃, y₄.

¹ This reversed procedure is quite characteristic of the elasticity of the mathematical
method, and of the kind and degree of freedom which exists there. Although initially it
deflects the inquiry into a direction which must be considered unnatural from any but
the strictest mathematical point of view, it is nevertheless effective; by an appropriate
technical manipulation it finally discloses solutions which have not been found in any
other way.
After our previous examples where the guidance came from heuristic considerations,
it is quite instructive to study this case where no heuristic help is relied on and solutions
are found by a purely mathematical trick, the reversal referred to above.
For the reader who might be dissatisfied with the use of such devices (i.e. exclusively
technical and non-conceptual ones), we submit that they are freely and legitimately used
in mathematical analysis.
We have repeatedly found the heuristic procedure easier to handle than the rigorous
one. The present case offers an example of the opposite.
This game is no longer symmetric.¹ Hence the permutations of the
players 1,2,3,4 are now legitimate only if accompanied by the corresponding
permutations of y₁, y₂, y₃, y₄.²
To begin with, the smallest component with which a given player k is
ever associated in the vectors of (37:2) in 37.3.2. is −1. Hence the vectors
will be imputations if and only if −1 ≥ v((k)), i.e.

(38:8)  yₖ ≤ −1  for k = 1,2,3,4.

The character of V as a set of imputations is thus established; let us
now see whether it is a solution. This investigation is similar to the proof
given in 36.2.3.-36.2.5.
38.2.2. The observations (36:5), (36:6) of 36.2.3. apply again. A two-element
set S = (i,j) is effective for α = {α₁, α₂, α₃, α₄} when αᵢ + αⱼ ≤ 0
(cf. (38:A)). Hence we have for the α', α'' of (37:2): In α'' every two-element
set S is effective. In α': No two-element set S which does not contain
the player 4 is effective, while those which do contain him, S = (1,4),
(2,4), (3,4), clearly are. However, if we consider S = (1,4), we may discard
the two others; S = (2,4) arises from it by interchanging 1 and 2, which
does not affect α';³ S = (3,4) is actually inferior to it after 1 and 3 are
interchanged, since ½ ≥ 0.⁴
Summing up:

(38:B)  Among the two-element sets S, those given below are certainly
necessary, and all others are certainly unnecessary:
(1,4) for α',⁵  all for α''.

Concerning three-element sets: Owing to the above we may exclude by
(36:6) all three-element sets for α'', and for α' those which contain (1,4) or
(2,4).⁶ This leaves only S = (1,2,3) for α'.
Summing up:

(38:C)  Among the three-element sets S, the one given below is
certainly necessary, and all others are certainly unnecessary:
(1,2,3) for α'.

¹ Unless y₁ = y₂ = y₃ = y₄.
² But there is nothing objectionable in such uses of the permutations of 1,2,3,4 as we
made in the formulation of (37:2) in 37.3.2.
³ This permutation and similar ones later are clearly legitimate devices in spite of
footnote 1 above. Observe footnote 1 on p. 309 and footnote 2 above.
⁴ As α'₄ = −1 we could discard all these sets, including S = (1,4), when v((4)) = −1,
i.e. when y₄ = −1, which is a possibility. But we are under no obligation to do this. We
prefer not to do it, in order to be able to treat y₄ = −1 and y₄ < −1 together.
⁵ And all permutations of 1,2,3,4; these modify α' too.
⁶ The latter obtains from the former by interchanging 1 and 2, which does not affect α'.
We leave to the reader the verification of (30:5:a) in 30.1.1., i.e. that no
α of V dominates any β of V. (Cf. the corresponding part of the proof in
36.2.4. Actually the proof of (30:5:b), which follows, also contains the
necessary steps.)
38.2.3. We next verify (30:5:b) in 30.1.1., i.e. that an imputation β
which is undominated by the elements of V must belong to V.
Consider a β undominated by the elements of V. If any two of β₁, β₂,
β₃, β₄ were < 0, we could make these (by permuting 1,2,3,4) β₁, β₂ < 0.
This gives α'' ⊳ β with S = (1,2) of (38:B). Hence at most one of β₁, β₂,
β₃, β₄ is < 0. If none is < 0, then all are ≥ 0. So each component of β
is ≥ the corresponding component of α'', and since both are imputations
(cf. footnote 2 on p. 311), it follows that they coincide, β = α''; and so
it is in V.
Hence precisely one of β₁, β₂, β₃, β₄ is < 0. By permuting 1,2,3,4 we
can make it β₄.
If any two of β₁, β₂, β₃ were < ½, we could make these (by permuting
1,2,3) β₁, β₂ < ½. Besides, β₄ < 0. So the interchange of 3 and 4 gives
α' ⊳ β with S = (1,2,3) of (38:C). Hence at most one of β₁, β₂, β₃ is < ½.
If none is < ½, then β₁, β₂, β₃ ≥ ½. Hence β₄ ≤ −3/2. But β₄ ≥ v((4)) = y₄,
so this necessitates y₄ ≤ −3/2, i.e. −y₄ ≥ 3/2. Thus we need −y₄ < 3/2 to
exclude this possibility, and as we are permuting freely 1,2,3,4, we even need

(38:9)  yₖ > −3/2  for k = 1,2,3,4.

If this condition is satisfied, then we can conclude that precisely one of
β₁, β₂, β₃ is < ½. By permuting 1,2,3, we can make it β₃.
So β₁, β₂ ≥ ½, β₃ ≥ 0. If β₄ ≥ −1,¹ then each component of β is ≥ the
corresponding component of α', and since both are imputations (cf. footnote
2 on p. 311) it follows that they coincide: β = α', and so it is in V.
Hence β₄ < −1. Also β₃ < ½. So interchange of 1 and 3 gives
α' ⊳ β with S = (1,4) of (38:B).
This, at last, is a contradiction, and thereby completes the verification
of (30:5:b) in 30.1.1.
The condition (38:9), which we needed for this proof, is really necessary:
it is easy to verify that

β' = {½, ½, ½, −3/2}

¹ If v((4)) = −1, i.e. if y₄ = −1, then this is certainly the case; but we do not wish
to assume it. (Cf. footnote 4 on p. 323.)
is undominated by our V, and the only way to prevent it from being an
imputation is to have −3/2 < v((4)) = y₄, i.e. −y₄ < 3/2.¹ Permuting 1,2,3,4
then gives (38:9).
Thus we need precisely (38:8) and (38:9). Summing up:

(38:D)  The V of (37:2) in 37.3.2. is a solution for a game of (38:A)
(with (38:6), (38:7) there) if and only if

(38:10)  −1 ≥ yₖ > −3/2  for k = 1,2,3,4.
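Criterion (38:D) can be checked numerically. The sketch below is ours (players are indexed 0–3, and the sample values of yₖ are arbitrary choices satisfying (38:10)): it builds the game of (38:A), forms the thirteen imputations of V, verifies (30:5:a) exactly, tests (30:5:b) on randomly sampled imputations, and confirms that for y₄ ≤ −3/2 the imputation β' = {½, ½, ½, −3/2} is undominated by V.

```python
import itertools, random

def make_game(y):
    # (38:A): v = 0 on two-element sets (38:6); v((k)) = y_k; zero-sum
    # determines the three-element sets via the complement.
    def v(S):
        if len(S) == 1:
            return y[S[0]]
        if len(S) == 3:
            (k,) = set(range(4)) - set(S)
            return -y[k]
        return 0.0
    return v

def solution_V():
    # (37:2): all permutations of {1/2, 1/2, 0, -1}, plus {0,0,0,0}.
    V = {p for p in itertools.permutations((0.5, 0.5, 0.0, -1.0))}
    V.add((0.0, 0.0, 0.0, 0.0))
    return sorted(V)

def dominates(a, b, v):
    for r in (1, 2, 3):
        for S in itertools.combinations(range(4), r):
            if sum(a[i] for i in S) <= v(S) and all(a[i] > b[i] for i in S):
                return True
    return False

y = [-1.2, -1.1, -1.3, -1.4]           # satisfies (38:10)
v = make_game(y)
V = solution_V()
assert len(V) == 13
# (30:5:a): no element of V dominates another.
assert not any(dominates(a, b, v) for a in V for b in V)
# (30:5:b), sampled: any imputation off V is dominated by some element of V.
rng = random.Random(1)
for _ in range(200):
    w = [rng.random() for _ in range(4)]
    t, e = sum(w), -sum(y)
    beta = [yk + e * x / t for yk, x in zip(y, w)]
    assert any(dominates(a, beta, v) for a in V)
# Necessity of (38:9): with y4 <= -3/2, beta' below is undominated by V.
v_bad = make_game([-1.2, -1.2, -1.2, -1.6])
assert not any(dominates(a, (0.5, 0.5, 0.5, -1.5), v_bad) for a in V)
```

The last assertion mirrors the argument of the text: the only set over which an element of V could dominate β' is the one-element set (4), and that set is never effective.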
38.2.4. Let us now reintroduce the normalization and reduction which
we abandoned temporarily, but which are necessary in order to refer these
results to Q, as pointed out immediately after (38:A).
The reduction formulae of 27.1.4. show that the share of the player k
must be altered by the amount α⁰ₖ, where

α⁰ₖ = −yₖ + ¼(y₁ + y₂ + y₃ + y₄),

and

γ = −¼(v((1)) + v((2)) + v((3)) + v((4))) = −¼(y₁ + y₂ + y₃ + y₄).

For a two-element set S = (i,j), v(S) is increased from its original value 0 to

α⁰ᵢ + α⁰ⱼ = −yᵢ − yⱼ + ½(y₁ + y₂ + y₃ + y₄) = ½(yₖ + yₗ − yᵢ − yⱼ)

(k, l are the two players other than i, j).
The above γ is clearly ≥ 1 > 0 (by (38:10)), hence the game is essential.
The normalization is now carried out by dividing the characteristic function,
as well as every player's share, by γ. Thus for S = (i,j), v(S) is now
modified further to

(α⁰ᵢ + α⁰ⱼ)/γ = −2(yₖ + yₗ − yᵢ − yⱼ)/(y₁ + y₂ + y₃ + y₄).

This then is the normalized and reduced form of the characteristic
function, as used in 34.2.1. for the representation by Q. (34:2) id. gives,
together with the above expression, the formulae

¹ Observe that the failure of V to dominate this β' could not be corrected by adding
β' to V (when y₄ ≤ −3/2). Indeed, β' dominates α'' = {0,0,0,0} with S = (1,2,3), so
it would be necessary to remove α'' from V, thereby creating new undominated
imputations, etc.
If y₁ = y₂ = y₃ = y₄ = −3/2, then a change of unit by 2/3 brings our game back to the
form (37:1) of 37.1.2., and it carries the above β' into the α^IV = {⅓, ⅓, ⅓, −1} of (37:3)
in 37.4.1. Thus further attempts to make our V over into a solution would probably
transform it gradually into (37:3), (37:4) in 37.4.1.-37.4.2. This is noteworthy, since we
started with (37:2) in 37.3.2.
These connections between the two solutions (37:2) and (37:3), (37:4) should be
investigated further.
(38:11)
x₁ = 2(y₁ − y₂ − y₃ + y₄)/(y₁ + y₂ + y₃ + y₄),
x₂ = 2(−y₁ + y₂ − y₃ + y₄)/(y₁ + y₂ + y₃ + y₄),
x₃ = 2(−y₁ − y₂ + y₃ + y₄)/(y₁ + y₂ + y₃ + y₄),

for the coordinates x₁, x₂, x₃ in Q.
38.2.5. Thus (38:10) and (38:11) together define the part of Q in which
these solutions, i.e. the solution (37:2) in 37.3.2. transformed as indicated
above, can be used. This definition is exhaustive, but implicit. Let us
make it explicit. I.e., given a point of Q with the coordinates x₁, x₂, x₃,
let us decide whether (38:10) and (38:11) can then be satisfied together (by
appropriate y₁, y₂, y₃, y₄).
We put for the hypothetical y₁, y₂, y₃, y₄

(38:12)  y₁ + y₂ + y₃ + y₄ = −4/z

with z indefinite. Then the equations (38:11) become

(38:12*)
y₁ − y₂ − y₃ + y₄ = −2x₁/z,
−y₁ + y₂ − y₃ + y₄ = −2x₂/z,
−y₁ − y₂ + y₃ + y₄ = −2x₃/z.

(38:12) and (38:12*) can be solved with respect to y₁, y₂, y₃, y₄:

(38:13)
y₁ = −(1 + (x₁ − x₂ − x₃)/2)/z,
y₂ = −(1 + (−x₁ + x₂ − x₃)/2)/z,
y₃ = −(1 + (−x₁ − x₂ + x₃)/2)/z,
y₄ = −(1 + (x₁ + x₂ + x₃)/2)/z.

Now (38:11) is satisfied, and we must use our freedom in choosing z to
satisfy (38:10).
Let w be the greatest and v the smallest of the four numbers

(38:14)
u₁ = 1 + (x₁ − x₂ − x₃)/2,  u₂ = 1 + (−x₁ + x₂ − x₃)/2,
u₃ = 1 + (−x₁ − x₂ + x₃)/2,  u₄ = 1 + (x₁ + x₂ + x₃)/2.

These are known quantities, since x₁, x₂, x₃ are assumed to be given.
Now (38:10) clearly means that 1 ≤ v/z and that w/z < 3/2, i.e. it means
that

(38:15)  (2/3)w < z ≤ v.

Obviously this condition can be fulfilled (for z) if and only if

(38:16)  (2/3)w < v.

And if (38:16) is satisfied, then condition (38:15) allows infinitely many
values, an entire interval, for z.
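The explicit test just described can be written out directly. The sketch below is ours (not the book's); it computes the four numbers (38:14), applies criterion (38:16), picks a z from the interval (38:15), recovers the yₖ of (38:13), and confirms that substituting them into (38:11) returns the original point of Q.

```python
def u_numbers(x1, x2, x3):
    # (38:14)
    return (1 + (x1 - x2 - x3) / 2, 1 + (-x1 + x2 - x3) / 2,
            1 + (-x1 - x2 + x3) / 2, 1 + (x1 + x2 + x3) / 2)

def in_Z(x1, x2, x3):
    u = u_numbers(x1, x2, x3)
    return (2 / 3) * max(u) < min(u)          # (38:16)

def ys(x1, x2, x3, z):
    # (38:13): y_k = -u_k / z
    return tuple(-uk / z for uk in u_numbers(x1, x2, x3))

def coords(y):
    # (38:11)
    s = sum(y)
    return (2 * (y[0] - y[1] - y[2] + y[3]) / s,
            2 * (-y[0] + y[1] - y[2] + y[3]) / s,
            2 * (-y[0] - y[1] + y[2] + y[3]) / s)

x = (0.1, -0.05, 0.02)
assert in_Z(*x)
u = u_numbers(*x)
v_, w_ = min(u), max(u)
z = 0.5 * ((2 / 3) * w_ + v_)                 # any z in ((2/3)w, v], cf. (38:15)
y = ys(*x, z)
assert all(-1.5 < yk <= -1 for yk in y)       # (38:10)
assert max(abs(a - b) for a, b in zip(coords(y), x)) < 1e-12
# At the center u = (1,1,1,1), so (38:15) becomes 2/3 < z <= 1, i.e. (38:3).
assert u_numbers(0, 0, 0) == (1.0, 1.0, 1.0, 1.0)
# A diagonal point beyond x1 = 2/9 falls outside Z (cf. 38.2.7. below).
assert not in_Z(0.3, 0.3, 0.3)
```

Note that the four numbers of (38:14) always sum to 4, which is why (38:12) holds automatically once the yₖ of (38:13) are formed.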
38.2.6. Before we draw any conclusions from (38:15), (38:16), we give
the explicit formulae which express what has become of the solution (37:2)
of 37.3.2. owing to our transformations. We must take the imputations
α', α'', loc. cit., add the amount α⁰ₖ to the component k (i.e. to the player k's
share), and divide this by γ.
These manipulations transform the possible values of the component k,
which are ½, 0, −1 in (37:2), as follows. We consider first k = 1, and
use the above expressions for α⁰ₖ and γ as well as (38:13); observe that by
(38:12) γ = −¼(y₁ + y₂ + y₃ + y₄) = 1/z. Then:

½ goes into (½ + α⁰₁)/γ = z/2 − z y₁ − 1 = z/2 + (x₁ − x₂ − x₃)/2,
0 goes into α⁰₁/γ = −z y₁ − 1 = (x₁ − x₂ − x₃)/2,
−1 goes into (−1 + α⁰₁)/γ = −z − z y₁ − 1 = −z + (x₁ − x₂ − x₃)/2.

For the other k = 2,3,4 these expressions are changed only in so far that their
x₁ − x₂ − x₃ is replaced by −x₁ + x₂ − x₃, −x₁ − x₂ + x₃, x₁ + x₂ + x₃,
respectively.¹
Summing up (and recalling (38:14)):

(38:E)  The component k is transformed as follows:
½ goes into z/2 + uₖ − 1,
0 goes into uₖ − 1,
−1 goes into −z + uₖ − 1,
with the u₁, u₂, u₃, u₄ of (38:14).

We leave it to the reader to restate (37:2) with the modification (38:E),
paying due attention to carrying out correctly the permutations 1,2,3,4
which are required there.
It will be noted that for the center, i.e. x₁ = x₂ = x₃ = 0, (38:E)
reproduces the formulae (38:1) of 38.1.1., as it should.
38.2.7. We now return to the discussion of (38:15), (38:16).
Condition (38:16) expresses that the four numbers u₁, u₂, u₃, u₄ of (38:14)
are not too far apart: their minimum is more than 2/3 of their maximum,
i.e. on a relative scale their sizes vary by less than 2:3.
This is certainly true at the center, where x₁ = x₂ = x₃ = 0; there
u₁, u₂, u₃, u₄ are all = 1. Hence in this case v = w = 1, and (38:15)

¹ This is immediate, owing to the form of the equations (38:13), and equally by
considering the influence of the permutations of the players 1,2,3,4 on the coordinates
x₁, x₂, x₃ as described in 34.3.2.
becomes 2/3 < z ≤ 1, proving the assertions made earlier in this discussion
(cf. (38:3) in 38.1.1.).
Denote the part of Q in which (38:16) is true by Z. Then even a
sufficiently small neighborhood of the center belongs to Z.¹ So Z is a
three-dimensional piece in the interior of Q, containing the center in its own
interior.
We can also express the relationship of Z to the diagonals of Q, say to
I-Center-VIII. Z contains the following parts of that diagonal (use
Figure 64): on one side precisely C, on the other a little less than half of
B.² We add that these solutions are different from the family of solutions
valid in (36:B) and (36:C) which were referred to in 36.3.
38.3. Interpretation of the Solutions
38.3.1. The family of solutions which we have thus determined possesses
several remarkable features.
We note first that for every game for which this family provides a solution
at all (i.e. in every point of Z) it gives infinitely many solutions.³ And all
we said in 37.5.1. applies again: these solutions are finite sets of imputations⁴
and possess the full symmetry of the game.⁵ Thus there is no "discrimination"
in any one of these solutions. Nor can the differences in "organizational
principles," which we discussed loc. cit., be ascribed to them. There
is nevertheless a simple "organizational principle" that can be stated in a
qualitative verbal form, to distinguish these solutions. We proceed to
formulate it.
38.3.2. Consider (38:E), which expresses the changes to which (37:2)
in 37.3.2. is to be subjected. It is clear that the worst possible outcome
for the player k in this solution is the last expression (since this corresponds
to −1), i.e. −z + uₖ − 1. This expression is > or = −1, according to
whether z is < or = uₖ. Now u₁, u₂, u₃, u₄ are the four numbers of (38:14),
the smallest of which is v. By (38:15) z ≤ v, i.e. always −z + uₖ − 1 ≥ −1,
and = occurs only for the greatest possible value of z, z = v, and then
only for those k for which uₖ attains its minimum, v.

¹ If x₁, x₂, x₃ differ from 0 by < 2/15, then each of the four numbers u₁, u₂, u₃, u₄ of (38:14)
is < 1 + 1/5 and > 1 − 1/5; hence on a relative scale their sizes vary by
< (1 + 1/5):(1 − 1/5) = 3:2. So we are still in Z. In other words: Z contains a cube with
the same center as Q, but with 2/15 of Q's (linear) size.
Actually Z is somewhat bigger than this.
² On that diagonal x₁ = x₂ = x₃, so the u₁, u₂, u₃, u₄ are: (three times) 1 − x₁/2 and
1 + 3x₁/2. So for x₁ ≥ 0, v = 1 − x₁/2, w = 1 + 3x₁/2, hence (38:16) becomes x₁ < 2/9.
And for x₁ ≤ 0, v = 1 + 3x₁/2, w = 1 − x₁/2, hence (38:16) becomes x₁ > −2/11. So the
intersection is: x₁ < 2/9 on the one side (this is precisely C), and x₁ > −2/11 on the other
(a little less than half of B).
³ The solution which we found contains four parameters: y₁, y₂, y₃, y₄, while the games
for which they are valid have only three parameters: x₁, x₂, x₃.
⁴ Each one has 13 elements, like (37:2) in 37.3.2.
⁵ In the center x₁ = x₂ = x₃ = 0 we have y₁ = y₂ = y₃ = y₄ (cf. (38:13)), i.e. symmetry
in 1,2,3,4. On the diagonal x₁ = x₂ = x₃ we have y₁ = y₂ = y₃ (cf. (38:13)), i.e.
symmetry in 1,2,3.
We restate this:

(38:F)  In this family of solutions, even as the worst possible outcome,
a player k faces, in general, something that is definitely better
than what he could get for himself alone, i.e. v((k)) = −1.
This advantage disappears only when z has its greatest possible
value, z = v, and then only for those k for which the corresponding
number u₁, u₂, u₃, u₄ in (38:14) attains the minimum in
(38:14).

In other words: In these solutions a defeated player is in general not
"exploited" completely, not reduced to the lowest possible level, the level
which he could maintain even alone, i.e. v((k)) = −1. We observed
before such restraint on the part of a victorious coalition, in the "milder"
kind of "discriminatory" solutions of the three-person game discussed in
33.1. (i.e. when c > −1, cf. the end of 33.1.2.). But there only one player
could be the object of this restraint in any one solution, and this phenomenon
went with his exclusion from the competition for coalitions. Now there is
no discrimination or segregation; instead this restraint applies to all players
in general, and in the center of Q ((38:1) in 38.1.1., with z < 1) the solution
is even symmetric!¹
38.3.3. Even when z assumes its maximum value v, in general only one
player will lose this advantage, since in general the four numbers u₁, u₂, u₃,
u₄ of (38:14) are different from each other and only one is equal to their
minimum v. All four players will lose it simultaneously only if u₁, u₂, u₃, u₄
are all equal to their minimum v, i.e. to each other, and one look at
(38:14) suffices to show that this happens only when x₁ = x₂ = x₃ = 0,
i.e. at the center.
This phenomenon of not "exploiting" a defeated player completely is a
very important possible (but by no means necessary) feature of our solutions,
i.e. of social organizations. It is likely to play a greater role in the
general theory also.
We conclude by stating that some of the solutions which we mentioned,
but failed to describe, in 36.3.2., also possess this feature. These are the
solutions in C of Figure 64. But nevertheless they differ from the solutions
which we have considered here.

¹ There is a quantitative difference of some significance as well. Both in our present
setup (four-person game, center of Q) and in the one referred to (three-person game in
the sense of 33.1.), the best a player can do (in the solutions which we found) is ½, and
the worst is −1.
The upper limit of what he may get in case of defeat, in those of our solutions where
he is not completely "exploited," is now −2/3 (i.e. −z with 2/3 < z ≤ 1), and it was ½ then
(i.e. c with −1 ≤ c < ½). So this zone now covers the fraction
(−2/3 − (−1))/(½ − (−1)) = (1/3)/(3/2) = 2/9,
i.e. 22% of the significant interval, while it then covered 100%.
CHAPTER VIII
SOME REMARKS CONCERNING n ≥ 5 PARTICIPANTS
39. The Number of Parameters in Various Classes of Games
39.1. The Situation for n = 3,4
39.1. We know that the essential games constitute our real problem
and that they may always be assumed in the reduced form and with γ = 1.
In this representation there exists precisely one zero-sum three-person
game, while the zero-sum four-person games form a three-dimensional
manifold.¹ We have seen further that the (unique) zero-sum three-person
game is automatically symmetric, while the three-dimensional manifold
of all zero-sum four-person games contains precisely one symmetric game.
Let us express this by stating, for each one of the above varieties of
games, how many dimensions it possesses, i.e. how many indefinite parameters
must be assigned specific (numerical) values in order to characterize a
game of that class. This is best done in the form of a table, given in Figure
65 in a form extended to all n ≥ 3.² Our statements above reappear in
the entries n = 3,4 of that table.
39.2. The Situation for All n ≥ 3
39.2.1. We now complete the table by determining the number of
parameters of the zero-sum n-person game, both for the class of all these
games, and for that of the symmetric ones.
The characteristic function is an aggregate of as many numbers v(S)
as there are subsets S in I = (1, ..., n), i.e. of 2ⁿ. These numbers are
subject to the restrictions (25:3:a)-(25:3:c) of 25.3.1., and to those due to the
reduced character and the normalization γ = 1, expressed by (27:5) in
27.2. Of these, (25:3:b) fixes v(−S) whenever v(S) is given, hence it
halves the number of parameters:³ so we have 2ⁿ⁻¹ instead of 2ⁿ. Next
(25:3:a) fixes one of the remaining v(S): v(∅); (27:5) fixes n of the remaining
v(S): v((1)), ..., v((n)); hence they reduce the number of parameters
by n + 1.⁴ So we have 2ⁿ⁻¹ − n − 1 parameters. Finally (25:3:c) need
not be considered, since it contains only inequalities.
39.2.2. If the game is symmetric, then v(S) depends only on the number
of elements p of S: v(S) = vₚ, cf. 28.2.1. Thus it is an aggregate of as
many numbers vₚ as there are p = 0, 1, ..., n, i.e. n + 1. These

¹ Concerning the general remarks, cf. 27.1.4. and 27.3.2.; concerning the zero-sum
three-person game cf. 29.1.2.; concerning the zero-sum four-person game cf. 34.2.1.
² There are no essential zero-sum games for n = 1,2!
³ S and −S are never the same set!
⁴ S = ∅, (1), ..., (n) differ from each other and from each other's complements.
NUMBER OF PARAMETERS 331

numbers are subject to the restrictions (28:11:a)-(28:11:c) of 28.2.1.; the
reduced character is automatic, and we demand also v₁ = −γ = −1.
(28:11:b) fixes v_{n−p} when vₚ is given; hence it halves the number of those
parameters for which n − p ≠ p. When n − p = p (i.e. n = 2p, which happens
only when n is even, and then p = n/2), (28:11:b) shows that this vₚ
must vanish.¹ So we have (n + 1)/2 parameters if n is odd and n/2 if n is even,
instead of the original n + 1. Next (28:11:a) fixes one of the remaining
vₚ: v₀; and v₁ = −γ = −1 fixes another one of the remaining vₚ: v₁;
hence they reduce the number of parameters by 2:² so we have (n + 1)/2 − 2
or n/2 − 2 parameters. Finally (28:11:c) need not be considered since it
contains only inequalities.
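Both counts can be re-derived by brute-force enumeration. The sketch below is ours, not the book's: it enumerates the subsets of I = (1, ..., n), applies the identifications described above, and counts the values left free; the results reproduce the closed forms and the entries of Figure 65.

```python
def all_games_params(n):
    players = frozenset(range(n))
    free, fixed = set(), set()
    for m in range(2 ** n):
        S = frozenset(i for i in range(n) if m >> i & 1)
        if S in fixed or S in free:
            continue
        if len(S) in (0, 1, n - 1, n):
            # v(empty) = 0 by (25:3:a), v((k)) = -1 by (27:5), and
            # their complements via v(-S) = -v(S), i.e. (25:3:b).
            fixed.update({S, players - S})
        else:
            free.add(S)
            fixed.add(players - S)   # complement determined by (25:3:b)
    return len(free)

def symmetric_params(n):
    free = 0
    for p in range(n + 1):
        if p in (0, 1, n - 1, n):    # v_0 = 0, v_1 = -1, and complements
            continue
        if p >= n - p:               # determined by v_{n-p}, or forced to 0
            continue
        free += 1
    return free

assert [all_games_params(n) for n in range(3, 9)] == [0, 3, 10, 25, 56, 119]
assert [symmetric_params(n) for n in range(3, 9)] == [0, 0, 1, 1, 2, 2]
assert all(all_games_params(n) == 2 ** (n - 1) - n - 1 for n in range(3, 9))
```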
39.2.3. We collect all this information in the table of Figure 65. We also
state explicitly the values arising by specialization to n = 3,4,5,6,7,8,
the first two of which were referred to previously.

Number of players | All games       | Symmetric games
        3         |       0*        |       0*
        4         |       3         |       0*
        5         |      10         |       1
        6         |      25         |       1
        7         |      56         |       2
        8         |     119         |       2
       ...        |      ...        |      ...
        n         | 2ⁿ⁻¹ − n − 1    | (n + 1)/2 − 2 for n odd
                  |                 | n/2 − 2 for n even

* Denotes that the game is unique.
Figure 65. Essential games. (Reduced and γ = 1.)
The rapid increase of the entries in the left-hand column of Figure 65
may serve as another indication, if one be needed, of how the complexity of a
game increases with the number of its participants. It seems noteworthy

¹ Contrast this with footnote 3 on p. 330!
² p = 0, 1 differ from each other and from each other's n − p. (The latter only
because of n ≥ 3.)

332 REMARKS CONCERNING n ≥ 5 PARTICIPANTS

that there is an increase in the right-hand column too, i.e. for the symmetric
games, but a much slower one.
40. The Symmetric Five-person Game
40.1. Formalism of the Symmetric Five-person Game
40.1.1. We shall not attempt a direct attack on the zero-sum five-person
game. The systematic theory is not far enough advanced to allow it;
and for a descriptive and casuistic approach (as used for the zero-sum
four-person game) the number of its parameters, 10, is rather forbidding.
It is possible however to examine the symmetric zero-sum five-person
games in the latter sense. The number of parameters, 1, is small but not
zero, and this is a qualitatively new phenomenon deserving consideration.
For n = 3,4 there existed only one symmetric game, so it is for n = 5
that it happens for the first time that the structure of the symmetric game
shows any variety.
40.1.2. The symmetric zero-sum five-person game is characterized by the
vₚ, p = 0,1,2,3,4,5 of 28.2.1., subject to the restrictions (28:11:a)-(28:11:c)
formulated there. (28:11:a), (28:11:b) state (with γ = 1)

(40:1)  v₀ = 0, v₁ = −1, v₄ = 1, v₅ = 0

and v₃ = −v₂, i.e.

(40:2)  v₂ = −η, v₃ = η.

Now (28:11:c) states v_{p+q} ≥ vₚ + v_q for p + q ≤ 5, and we can subject
p, q to the further restrictions of (28:12) id. Therefore p = 1, q = 1,2,¹
and so these two inequalities obtain (using (40:1), (40:2)):

p = 1, q = 1:  −η ≥ −2;   p = 1, q = 2:  η ≥ −1 − η;

i.e.

(40:3)  −½ ≤ η ≤ 2.

Summing up:

(40:A)  The symmetric zero-sum five-person game is characterized
by one parameter η with the help of (40:1), (40:2). The domain
of variability of η is (40:3).
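The characterization (40:A) is easy to confirm mechanically, checking all of (28:11:c) rather than only the two inequalities singled out above. In the sketch below (ours, not the text's), vₚ is built from η by (40:1), (40:2); the admissible η turn out to be exactly the interval (40:3).

```python
def v_of_eta(eta):
    # (40:1), (40:2): v_p for the symmetric zero-sum five-person game
    return {0: 0.0, 1: -1.0, 2: -eta, 3: eta, 4: 1.0, 5: 0.0}

def admissible(eta):
    # (28:11:c): v_{p+q} >= v_p + v_q whenever p + q <= 5
    v = v_of_eta(eta)
    return all(v[p + q] >= v[p] + v[q]
               for p in range(6) for q in range(6) if p + q <= 5)

assert admissible(-0.5) and admissible(0.3) and admissible(2.0)
assert not admissible(-0.51) and not admissible(2.01)
# Only p = 1, q = 1 and p = 1, q = 2 actually bind: they give
# -eta >= -2 and eta >= -1 - eta, i.e. -1/2 <= eta <= 2, as in (40:3).
```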
40.2. The Two Extreme Cases
40.2.1. It may be useful to give a direct picture of the symmetric games
described above. Let us first consider the two extremes of the interval
(40:3):

η = 2,  η = −½.

¹ This is easily verified by inspection of (28:12), or by using the inequalities of footnote
2 on p. 250. These give 1 ≤ p ≤ 5/3, 1 ≤ q ≤ 2; hence as p, q are integers, p = 1,
q = 1,2.
THE SYMMETRIC FIVE-PERSON GAME 333

Consider first η = 2: In this case v(S) = −2 for every two-element
set S; i.e. every coalition of two players is defeated.¹ Thus a coalition of
three (being the set complementary to the former) is a winning coalition.
This tells the whole story: In the gradual crystallization of coalitions, the
point at which the passage from defeat to victory occurs is when the size
increases from two to three, and at this point the transition is 100%.²
Summing up:

(40:B)  η = 2 describes a game in which the only objective of all
players is to form coalitions of three players.

40.2.2. Consider next η = −½. In this case we argue as follows:

v(S) = ½ when S has 2 elements,
v(S) = 1 when S has 4 elements.

A coalition of four always wins.³
Now the above formula shows that a coalition of two is doing just as
well, pro rata, as a coalition of four; hence it is reasonable to consider the
former just as much winning coalitions as the latter. If we take this
broader view of what constitutes winning, we may again affirm that the
whole story of the game has been told: In the formation of coalitions, the
point at which the passage from defeat to victory occurs is when the size
increases from one to two; at this point the transition is 100%.⁴
Summing up:

(40:C)  η = −½ describes a game in which the only objective of all
players is to form coalitions of two players.
40.2.3. On the basis of (40:B) and (40:C) it would be quite easy to guess
heuristically solutions for their respective games. This, as well as the
exact proof that those sets of imputations are really solutions, is easy; but
we shall not consider this matter further.
Before we pass to the consideration of the other η of (40:3), let us remark
that (40:B) and (40:C) are obviously the simplest instances of a general

¹ Cf. the discussion in 35.1.1., particularly footnote 4 on p. 296.
² One player is just as much defeated as two, four are no more victorious than three.
Of course a coalition of three has no motive to take in a fourth partner; it seems (heuristically)
plausible that if they do they will accept him only on the worst possible terms. But
nevertheless such a coalition of four wins if viewed as a unit, since the remaining isolated
player is defeated.
³ In any zero-sum n-person game any coalition of n − 1 wins, since an isolated player
is always defeated. Cf. loc. cit. above.
⁴ One player is defeated, two or four players are victorious. A coalition of three
players is a composite case deserving some attention: v(S) is −½ for a three-element set
S, i.e. it obtains from the ½ of a two-element set by the addition of a −1. Thus a coalition of
three is no better than a winning coalition of two (which it contains) plus the remaining
isolated and defeated player separately. This coalition is just a combination of a winning
and a defeated group whose situation is entirely unaltered by this operation.
method of defining games. This procedure (which is more general than that
of Chapter X, referred to in footnote 4 on p. 296) will be considered exhaustively
elsewhere (for asymmetric games also). It is subject to some restrictions
of an arithmetical nature; thus it is clear that there can be no (essential
symmetric zero-sum) n-person game in which every coalition of p is winning
if p is a divisor of n, since then n/p such coalitions could be formed and
everybody would win with no loser left. On the other hand the same requirement
with p = n − 1 does not restrict the game at all (cf. footnote 3, p. 333).
40.3. Connection between the Symmetric Five-person Game and the
1,2,3-symmetric Four-person Game
40.3.1. Consider now the η in the interior of (40:3). The situation is
somewhat similar to that discussed at the end of 35.3. We have some
heuristic insight into the conditions at the two ends of (40:3) (cf. above).
Any point η of (40:3) is somehow "surrounded" by these endpoints. More
precisely, it is their center of gravity if appropriate weights are used.¹
The remarks made loc. cit. apply again: while this construction represents
all games of (40:3) as combinations of the extreme cases (40:B), (40:C), it
is nevertheless not justified to expect that the strategies of the former can
be obtained by any direct process from those of the latter. Our experiences
in the case of the zero-sum four-person game speak for themselves.
There is, however, another analogy with the four-person game which
gives some heuristic guidance. The number of parameters in our case is the
same as for those zero-sum four-person games which are symmetric with
respect to the players 1,2,3; we have now the parameter η which runs over

(40:3)  −½ ≤ η ≤ 2,

while the games referred to had the parameter x₁ which varies over

(40:4)  −1 ≤ x₁ ≤ 1.²
This analogy between the (totally) symmetric five-person game and the
1,2,3-symmetric four-person game is so far entirely formal. There is,
however, a deeper significance behind it. To see this we proceed as follows:
40.3.2. Consider a symmetric five-person game Γ with its η in (40:3).
Let us now modify this game by combining the players 4 and 5 into one
person, i.e. one player 4'. Denote the new game by Γ'. It is important to
realize that Γ' is an entirely new game: we have not asserted that in Γ
players 4 and 5 will necessarily act together, form a coalition, etc., or that
there are any generally valid strategical considerations which would motivate
just this coalition.³ We have forced 4 and 5 to combine; we did this
by modifying the rules of the game and thereby replacing Γ by Γ'.

¹ The reader can easily carry out this composition in the sense of footnote 1 on p. 304,
relying on our equations (40:1), (40:2) in 40.1.2.
² Cf. 35.3.2. In the representation in Q used there, x₁ = x₂ = x₃.
³ This ought to be contrasted with the discussion in 36.1.2., where a similar combination
of two players was formed under such conditions that this merger seemed strategically
justified.
THE SYMMETRIC FIVE-PERSON GAME 335
Now Γ is a symmetric five-person game, while Γ′ is a 1,2,3-symmetric
four-person game.¹ Given the η of Γ we shall want to determine the
x₁ of Γ′ in order to see what correspondence of (40:3) and (40:4) this defines.
Afterwards we shall investigate whether there are not, in spite of what was
said above, some connections between the strategies, i.e. the solutions, of
Γ and Γ′.

The characteristic function v′(S) of Γ′ is immediately expressible in
terms of the characteristic function v(S) of Γ. Indeed:
v′((1)) = v((1)) = −1,            v′((2)) = v((2)) = −1,
v′((3)) = v((3)) = −1,            v′((4′)) = v((4,5)) = −η;
v′((1,2)) = v((1,2)) = −η,        v′((1,3)) = v((1,3)) = −η,
v′((2,3)) = v((2,3)) = −η,        v′((1,4′)) = v((1,4,5)) = η,
v′((2,4′)) = v((2,4,5)) = η,      v′((3,4′)) = v((3,4,5)) = η;
v′((1,2,3)) = v((1,2,3)) = η,     v′((1,2,4′)) = v((1,2,4,5)) = 1,
v′((1,3,4′)) = v((1,3,4,5)) = 1,  v′((2,3,4′)) = v((2,3,4,5)) = 1;
and of course
v′(∅) = v′((1,2,3,4′)) = 0.
While Γ was normalized and reduced, Γ′ is neither; and we must bring
Γ′ into that form, since we want to compute its x₁, x₂, x₃, i.e. refer it to the
Q of 34.2.2.
Let us therefore apply first the normalization formulae of 27.1.4. They
show that the share of the player k = 1,2,3,4′ must be altered by the
amount αₖ⁰ where

α₁⁰ = α₂⁰ = α₃⁰ = 1 − γ,    α₄′⁰ = η − γ,

and

γ = (3 + η)/4.

Hence

α₁⁰ = α₂⁰ = α₃⁰ = (1 − η)/4,    α₄′⁰ = −3(1 − η)/4.

This γ is clearly ≥ (3 − ½)/4 = 5/8 > 0 (by (40:3)); hence the game is
essential. The normalization is now carried out by dividing every player's
share by γ.

Thus for a two-element set S = (i, j), v′(S) is replaced by

v″(S) = (v′(S) + αᵢ⁰ + αⱼ⁰)/γ.

Consequently a simple computation yields

v″((1,2)) = v″((1,3)) = v″((2,3)) = 2(1 − 3η)/(3 + η),
v″((1,4′)) = v″((2,4′)) = v″((3,4′)) = 2(3η − 1)/(3 + η).
1 The participants in Γ are players 1,2,3,4,5, who all have the same role in the original
Γ. The participants in Γ′ are players 1,2,3 and the composite player (4,5) = 4′. Clearly
1,2,3 still have the same role, but 4′ is different.
336 REMARKS CONCERNING n ≥ 5 PARTICIPANTS
This then is the normalized and reduced form of the characteristic
function, as used in 34.2. for the representation by Q. (34:2) in 34.2.1.
gives, together with the above expression, the formula

x₁ = x₂ = x₃ = (3η − 1)/(3 + η).

Taking x₁ = x₂ = x₃ for granted, this relation can also be written as follows:

(40:5)    (3 − x₁)(3 + η) = 10.

Now it is easy to verify that (40:5) maps the η-domain (40:3) on the
x₁-domain (40:4). The mapping is obviously monotone. Its details are
shown in Figure 66 and in the adjoining table of corresponding values of
x₁ and η. The curve in this figure represents the relation (40:5) in the x₁,
η-plane. This curve is clearly (an arc of) a hyperbola.
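The reduction of Γ′ just carried out can be checked mechanically. The following Python sketch is ours, not the authors': it encodes v(S) by coalition size, applies the normalization of 27.1.4., and verifies relation (40:5) in exact arithmetic. The reading x₁ = v″((1,4′))/2 of the Q-coordinate is an assumption of this sketch.

```python
from fractions import Fraction

def v_five(eta):
    """v(S) of the reduced symmetric five-person game: v depends only on
    |S|, with values 0, -1, -eta, eta, 1, 0 for |S| = 0, 1, 2, 3, 4, 5."""
    by_size = [Fraction(0), Fraction(-1), -eta, eta, Fraction(1), Fraction(0)]
    return lambda s: by_size[len(s)]

def v_merged(eta):
    """v'(S) of Gamma', obtained by fusing players 4 and 5 into 4'."""
    v = v_five(eta)
    expand = {1: {1}, 2: {2}, 3: {3}, "4'": {4, 5}}
    return lambda s: v(set().union(*(expand[k] for k in s)))

def x1(eta):
    """Normalize and reduce Gamma' (formulae of 27.1.4), then read off the
    Q-coordinate, assuming x1 = v''((1,4'))/2."""
    v = v_merged(eta)
    players = (1, 2, 3, "4'")
    gamma = -sum(v({k}) for k in players) / 4       # gamma = (3 + eta)/4
    alpha = {k: -gamma - v({k}) for k in players}   # shifts alpha_k^0
    v_red = (v({1, "4'"}) + alpha[1] + alpha["4'"]) / gamma
    return v_red / 2

# Relation (40:5) holds throughout the interval -1/2 <= eta <= 2:
for eta in (Fraction(-1, 2), Fraction(1, 3), Fraction(2)):
    assert (3 - x1(eta)) * (3 + eta) == 10
```

The endpoints η = −½ and η = 2 map onto x₁ = −1 and x₁ = 1 respectively, as the monotone correspondence of (40:3) and (40:4) requires.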
40.3.3. Our analysis of the 1,2,3-symmetric four-person games has
culminated in the result stated in 36.3.2.: The games, i.e. the diagonal
I-Center-VIII in Q which represents them, are divided into five classes
A-E, each of which is characterized by a certain qualitative type of
solutions. The positions of the zones A-E on the diagonal I-Center-
VIII, i.e. the interval −1 ≤ x₁ ≤ 1, are shown in Figure 64.
The present results suggest therefore the consideration of the corresponding
classes of symmetric five-person games Γ in the hope that some
heuristic lead for the detection of their solutions may emerge from their
comparison with the 1,2,3-symmetric four-person games Γ′, class by class.

Using the table in Figure 66 we obtain the zones A-E in −½ ≤ η ≤ 2
which are the images of the zones A-E in −1 ≤ x₁ ≤ 1. The details
appear in Figure 67.
A detailed analysis of the symmetric five-person games can be carried
out on this basis. It discloses that the zones A, B do indeed play the role
which we expect, but that the zones C, D, E must be replaced by others,
C′, D′. These zones A-D′ in −½ ≤ η ≤ 2 and their inverse images A-D′
in −1 ≤ x₁ ≤ 1 (again obtained with the help of the table of Figure 66)
are shown in Figure 68.

It is remarkable that the x₁-diagram of Figure 68 shows more symmetry
than that of Figure 67, although it is the latter which is significant for the
1,2,3-symmetric four-person games.
40.3.4. The analysis of symmetric five-person games has also some
heuristic value beyond the immediate information it gives. Indeed, by
comparing the symmetric five-person game Γ and the 1,2,3-symmetric
four-person game Γ′ which corresponds to it, and by studying the differences
between their solutions, one observes the strategic effects of the merger
of players 4 and 5 into one (composite) player 4′. To the extent to which the
solutions present no essential differences (which is the case in the zones A,
B, as indicated above) one may say that this merger did not affect the really
Figure 66.

Corresponding values of x₁ and η:

x₁:  −1    −3/4   −1/2   −1/4   0     1/4    1/2   3/4    1
η:   −1/2  −1/3   −1/7   1/13   1/3   7/11   1     13/9   2
Figure 67.
Figure 68.
significant strategic considerations.¹ On the other hand, when such differences
arise (this happens in the remaining zones) we face the interesting
situation that even when 4 and 5 happen to cooperate in Γ, their joint
position is dislocated by the possibility of their separation.²
Space forbids a fuller discussion based on the rigorous concept of solutions.
1 Of course one must expect, in solutions of Γ, arrangements where the players 4 and 5
are ultimately found in opposing coalitions. It is clear that this can have no parallel
in Γ′. All we mean by the absence of essential differences is that those imputations in a
solution of Γ which indicate a coalition of 4 and 5 should correspond to equivalent imputations
in the solution of Γ′.

These ideas require further elaboration, which is possible, but it would lead too far
to undertake it now.
2 Already in 22.2., our first discussion of the three-person game disclosed that the
division of proceeds within a coalition is determined by the possibilities of each partner
in case of separation. But the situation which we now visualize is different. In our
present Γ it can happen that even the total share of player 4 plus player 5 is influenced
by this "virtual" fact.

A qualitative idea of such a possibility is best obtained by considering this: When a
preliminary coalition of 4 and 5 is bargaining with prospective further allies, their bargaining
position will be different if their coalition is known to be indissoluble (in Γ′)
than when the opposite is known to be a possibility (in Γ).
CHAPTER IX
COMPOSITION AND DECOMPOSITION OF GAMES
41. Composition and Decomposition
41.1. Search for n-person Games for Which All Solutions Can Be Determined
41.1.1. The last two chapters will have conveyed a specific idea of the
rapidity with which the complexity of our problem increases as the number
n of participants goes to 4, 5, etc. In spite of their incompleteness,
those considerations tended to be so voluminous that it must seem completely
hopeless to push this casuistic approach beyond five participants.¹
Besides, the fragmentary character of the results gained in this manner very
seriously limits their usefulness in informing us about the general possibilities
of the theory.
On the other hand, it is absolutely vital to get some insight into the
conditions which prevail for the greater values of n. Quite apart from the
fact that these are most important for the hoped-for economic and sociological
applications, there is also this to consider: With every increase of
n, qualitatively new phenomena appeared. This was clear for each of
n = 2, 3, 4 (cf. 20.1.1., 20.2., 35.1.3., and also the remarks of footnote 2
on p. 221), and if we did not observe it for n = 5 this may be due to our
lack of detailed information about this case. It will be seen later (cf. the
end of 46.12.) that very important qualitative phenomena make their
first appearance for n = 6.
41.1.2. For these reasons it is imperative that we find some technique
for the attack on games with higher n. In the present state of things we
cannot hope for anything systematic or exhaustive. Consequently the
natural procedure is to find some special classes of games involving many
participants² that can be decisively dealt with. It is a general experience
in many parts of the exact and natural sciences that a complete understanding
of suitable special cases, which are technically manageable but
which embody the essential principles, has a good chance to be the pacemaker
for the advance of the systematic and exhaustive theory.

We will formulate and discuss two such families of special cases. They
can be viewed as extensive generalizations of two four-person games, so
that each one of these will be the prototype of one of the two families.
These two four-person games correspond to the 8 corners of the cube Q,
introduced in 34.2.2.: Indeed, we saw that those corners presented only

1 As was seen in Chapter VIII, it was already necessary for five participants to
restrict ourselves to the symmetric case.
2 And in such a manner that each one plays an essential role!
340 COMPOSITION AND DECOMPOSITION OF GAMES
two strategically different types of games: the corners I, V, VI, VII,
discussed in 35.1., and the corners II, III, IV, VIII, discussed in 35.2.
Thus the corners I and VIII of Q are the prototypes of those generalizations
to which this chapter and the following one will be devoted.
41.2. The First Type. Composition and Decomposition
41.2.1. We first consider the corner VIII of Q, discussed in 35.2. As
was brought out in 35.2.2., this game has the following conspicuous feature:
The four participants fall into two separate sets (one of three elements and
the other of one) which have no dealings with each other. I.e. the players
of each set may be considered as playing a separate game, strictly among
themselves and entirely unrelated to the other set.

The natural generalization of this is to a game Γ of n = k + l participants,
with the following property: The participants fall into two sets of
k and l elements, respectively, which have no dealings with each other.
I.e. the players of each set may be considered as playing a separate game, say
Δ and H respectively, strictly among themselves and entirely unrelated
to the other set.¹

We will describe this relationship between the games Γ, Δ, H by the
following nomenclature: Composition of Δ, H produces Γ, and conversely
Γ can be decomposed into the constituents Δ, H.²
41.2.2. Before we deal with the above verbal definitions in an exact
way, some qualitative remarks may be appropriate.

First, it should be noted that our procedure of composition and decomposition
is closely analogous to one which has been successfully applied in
many parts of modern mathematics.³ As these matters are of a highly
technical mathematical nature, we will not say more about them here.
Suffice it to state that our present procedure was partly motivated by those
analogies. The exhaustive but not trivial results which we shall obtain
1 In the original game of 35.2. the second set consisted of one isolated player, who was
also termed a "dummy." This suggests an alternative generalization to the above one:
A game in which the participants fall into two sets such that those of the first set play a
game strictly among themselves etc., while those of the second set have no influence upon
the game, neither regarding their own fate, nor that of the others. (These are then the
"dummies.")

This is, however, a special case of the generalization in the text. It is subsumed in
it by taking the game H of the second set as an inessential one, i.e. one which has a definite
value for each one of its participants that cannot be influenced by anybody. (Cf. 27.3.1.
and the end of 43.4.2. A player in an inessential game could conceivably deteriorate his
position by playing inappropriately. We ought to exclude this possibility for a
"dummy" but this point is of little importance.)

The general discussion which we are going to carry out (both games Δ and H essential)
will actually disclose a phenomenon which does not arise in the special case to which
the corner VIII of 35.2. belongs, i.e. the case of "dummies" (H inessential). The
new phenomenon will be discussed in 46.7., 46.8., and the case of "dummies" where
nothing new happens in 46.9.

2 It would seem natural to extend the concepts of composition and decomposition to
more than 2 constituents. This will be carried out in 43.2., 43.3.

3 Cf. G. Birkhoff & S. MacLane: A Survey of Modern Algebra, New York, 1941,
Chapt. XIII.
COMPOSITION AND DECOMPOSITION 341
and also be able to use for further interpretations, are a rather encouraging
symptom from a technical point of view.

41.2.3. Second, the reader may feel that the operation of composition
is of an entirely formal and fictitious nature. Why should two games,
Δ and H, played by two distinct sets of players and having absolutely no
influence upon each other, be considered as one game Γ?

Our results will disclose that the complete separation of the games Δ
and H, as far as the rules are concerned, does not necessarily imply the
same for their solutions. I.e.: Although the two sets of players cannot
influence each other directly, nevertheless, when they are regarded as one
set, one society, there may be stable standards of behavior which
establish correlations between them.¹ The significance of this circumstance
will be elaborated more fully when we reach it loc. cit.
41.2.4. Besides, it should be noted that this procedure of composition
is quite customary in the natural sciences as well as in economic theory.
Thus it is perfectly legitimate to consider two separate mechanical systems,
situated, to take an extreme case, say one on Jupiter and one on Uranus,
as one. It is equally feasible to consider the internal economies of two
separate countries, the connections between which are disregarded, as one.
This is, of course, the preliminary step before introducing the interacting
forces between those systems. Thus we could choose in our first example
as those two systems the two planets Jupiter and Uranus themselves
(both in the gravitational field of the Sun), and then introduce as interaction
the gravitational forces which the planets exert on each other. In our
second example, the interaction enters with the consideration of international
trade, international capital movements, migrations, etc.
We could equally use the decomposable game Γ as a stepping stone
to other games in its neighborhood which, in their turn, permit no decomposition.²

In our present considerations, however, these latter modifications will
not be considered. Our interest is in the correlations introduced by the
solutions referred to at the beginning of this paragraph.
41.3. Exact Definitions
41.3.1. Let us now proceed to the strictly mathematical description
of the composition and decomposition of games.

Let k players 1′, . . . , k′, forming the set J = (1′, . . . , k′), play the
game Δ; and l players 1″, . . . , l″, forming the set K = (1″, . . . , l″),
play the game H. We re-emphasize that J and K are disjoint sets of
players and³ that the games Δ and H are without any influence upon each
1 There is some analogy between this and the phenomenon noted before (cf. 21.3.,
37.2.1.) that a symmetry of the game need not imply the same symmetry in all solutions.
2 Cf. 35.3.3., applied to the neighborhood of corner VIII, which according to 35.2. is a
decomposable game. The remark of footnote 2 on p. 303 on perturbations is also
pertinent.
3 If the same players 1, . . . , n are playing simultaneously two games, then an
entirely different situation prevails. That is the superposition of games referred to in
other. Denote the characteristic functions of these two games by vΔ(S)
and vH(T) respectively, where S ⊆ J and T ⊆ K.

In forming the composite game Γ, it is convenient to use the same
symbols 1′, . . . , k′, 1″, . . . , l″ for its n = k + l players.¹ They
form the set I = J ∪ K = (1′, . . . , k′, 1″, . . . , l″).

Clearly every set R ⊆ I permits a unique representation

(41:1)    R = S ∪ T,    S ⊆ J,    T ⊆ K;

the inverse of this formula being

(41:2)    S = R ∩ J,    T = R ∩ K.²

Denote the characteristic function of the game Γ by vΓ(R) with R ⊆ I.

The intuitive fact that the games Δ and H combine without influencing
each other to Γ has this quantitative expression: The value in Γ of a coalition
R ⊆ I obtains by addition of the value in Δ of its part S (⊆ J) in J and of
the value in H of its part T (⊆ K) in K. Expressed by a formula:

(41:3)    vΓ(R) = vΔ(S) + vH(T)    where R, S, T are linked by (41:1),
i.e. (41:2).³
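In modern terms, (41:3) is a one-line combination rule. The following Python sketch is ours, not the authors': characteristic functions are encoded as dicts over frozensets, and the toy constituent games are illustrative choices.

```python
from itertools import chain, combinations

def subsets(players):
    """All subsets of a player set, as frozensets (the empty coalition included)."""
    s = sorted(players)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def compose(v_delta, J, v_h, K):
    """Formula (41:3): v_Gamma(R) = v_Delta(R ∩ J) + v_H(R ∩ K),
    the parts S, T of R being taken from (41:2)."""
    J, K = frozenset(J), frozenset(K)
    assert not J & K                    # the two player sets are disjoint
    return {R: v_delta[R & J] + v_h[R & K] for R in subsets(J | K)}

# Toy constituents: the essential zero-sum three-person game on J and an
# inessential zero-sum two-person game on K (player names are ours).
J, K = {"1'", "2'", "3'"}, {"1''", "2''"}
v_delta = {S: (0, -1, 1, 0)[len(S)] for S in subsets(J)}
v_h = {T: 0 for T in subsets(K)}
v_gamma = compose(v_delta, J, v_h, K)
```

The value of every coalition in the composite game splits additively across J and K, which is exactly the circumstance that the criterion derived in the sequel isolates.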
41.3.2. The formula (41:3) expressed the composite vΓ(R) by means of its
constituents vΔ(S), vH(T). However, it also contains the answer to the
inverse problem: To express vΔ(S), vH(T) by vΓ(R).

Indeed vΔ(∅) = vH(∅) = 0.⁴ Hence putting alternately T = ∅
and S = ∅ in (41:3) gives:

(41:4)    vΔ(S) = vΓ(S)    for S ⊆ J,
(41:5)    vH(T) = vΓ(T)    for T ⊆ K.⁵
We are now in a position to express the fact of the decomposability of the
game Γ with respect to the two sets J and K. I.e.: the given game Γ
(among the elements of I = J ∪ K) is such that it can be decomposed into
two suitable games Δ (among the elements of J) and H (among the elements
of K). As stated, this is an implicit property of Γ involving the existence
of the unknown Δ, H. But it will be expressed as an explicit property of Γ.
Indeed: If two such Δ, H exist, then they cannot be anything but those
described by (41:4), (41:5). Hence the property of Γ in question is that the
27.6.2. and also in 35.3.4. Its influences on the strategy are much more complex and
scarcely describable by general rules, as was pointed out at the latter loc. cit.
1 Instead of the usual 1, . . . , n.
2 These formulae (41:1), (41:2) have an immediate verbal meaning. The reader may
find it profitable to formulate it.
3 Of course, a rigorous deduction on the basis of 25.1.3. is feasible without difficulty.
All of 25.3.2. applies in this case.
4 Note that the empty set is a subset of both J and K; since J and K are disjoint,
it is their only common subset.
5 This is an instance of the technical usefulness of our treating the empty set as a
coalition. Cf. footnote 2 on p. 241.
Δ, H of (41:4), (41:5) fulfill (41:3). Substituting, therefore, (41:4), (41:5)
into (41:3), using (41:1) to express R in terms of S, T, gives this:

(41:6)    vΓ(S ∪ T) = vΓ(S) + vΓ(T)    for S ⊆ J, T ⊆ K.

Or, if we use (41:2) (expressing S, T in terms of R) in place of (41:1):

(41:7)    vΓ(R) = vΓ(R ∩ J) + vΓ(R ∩ K)    for R ⊆ I.
41.3.3. In order to see the role of the equations (41:6), (41:7) in the
proper light, a detailed reconsideration of the basic principles upon which
they rest is necessary. This will be done in the sections 41.4.-42.5.2. which
follow. However, two remarks concerning the interpretation of these
equations can be made immediately.

First: (41:6) expresses that a coalition between a set S ⊆ J and a set
T ⊆ K has no attraction: while there may be motives for players
within J to combine with each other, and similarly for players within K,
there are no forces acting across the boundaries of J and K.

Second: To those readers who are familiar with the mathematical theory
of measure, we make this further observation in continuation of that
made at the end of 27.4.3.: (41:7) is exactly Carathéodory's definition of
measurability. This concept is quite fundamental for the theory of additive
measure, and Carathéodory's approach to it appears to be the technically
superior one to date.¹ Its emergence in the present context is a remarkable
fact which seems to deserve further study.
41.4. Analysis of Decomposability
41.4.1. We obtained the criteria (41:6), (41:7) of Γ's decomposability
by substituting the vΔ(S), vH(T) obtained from (41:4), (41:5) into the
fundamental condition (41:3). However, this deduction contains a lacuna:
We did not verify whether it is possible to find two games Δ, H which produce
the vΔ(S), vH(T) formally defined by (41:4), (41:5).

There is no difficulty in formalizing these extra requirements. As we
know from 25.3.1. they mean that vΔ(S) and vH(T) fulfill the conditions
(25:3:a)-(25:3:c) eod. It must be understood that we assume the given
vΓ(R) to originate from a game Γ, i.e. that vΓ(R) fulfills these conditions.
Hence the following question presents itself:

(41:A) vΓ(R) fulfills (25:3:a)-(25:3:c) in 25.3.1. together with the
above (41:6), i.e. (41:7). Will then the vΔ(S) and vH(T) of
(41:4), (41:5) also fulfill (25:3:a)-(25:3:c) in 25.3.1.? Or, if this
is not the case, which further postulate must be imposed upon
vΓ(R)?

To decide this, we check (25:3:a)-(25:3:c) of 25.3.1. separately for vΔ(S)
and vH(T). It is convenient to take them up in a different order.
41.4.2. Ad (25:3:a): By virtue of (41:4), (41:5), this is the same statement
for vΔ(S) and vH(T) as for vΓ(R).
1 Cf. C. Carathéodory: Vorlesungen über Reelle Funktionen, Berlin, 1918, Chapt. V.
Ad (25:3:c): By virtue of (41:4), (41:5), this carries over to vΔ(S) and
vH(T) from vΓ(R); it amounts only to a restriction from the R ⊆ I to
S ⊆ J and T ⊆ K.

Before discussing the remaining (25:3:b), we insert a remark concerning
(25:4) of 25.4.1. Since this is a consequence of (25:3:a)-(25:3:c), it is
legitimate for us to draw conclusions from it, and it will be seen that this
anticipation simplifies the analysis of (25:3:b).

From here on we will have to use promiscuously complementary sets in
I, J, K. It is, therefore, necessary to avoid the notation −S, and to write
instead I − S, J − S, K − S, respectively.

Ad (25:4): For vΔ(S) and vH(T) the role of the set I is taken over by the
sets J and K, respectively. Hence this condition becomes:

vΔ(J) = 0,
vH(K) = 0.

Owing to (41:4), (41:5), this means

(41:8)    vΓ(J) = 0,
(41:9)    vΓ(K) = 0.

Since K = I − J, therefore (25:3:b) (applied to vΓ(S), for which it was
assumed to hold) gives

(41:10)    vΓ(J) + vΓ(K) = 0.

Thus (41:8) and (41:9) imply each other by virtue of the identity (41:10).
In (41:8) or (41:9) we have actually a new condition, which does not
follow from (41:6) or (41:7).
Ad (25:3:b): We will derive its validity for vΔ(S) and vH(T) from the
assumed one for vΓ(R). By symmetry it suffices to consider vΔ(S).

The relation to be proven is

(41:11)    vΔ(S) + vΔ(J − S) = 0.

By (41:4) this means

(41:12)    vΓ(S) + vΓ(J − S) = 0.

Owing to (41:8), which we must require anyhow, this may be written

(41:13)    vΓ(S) + vΓ(J − S) = vΓ(J).

(Of course, S ⊆ J.)

To prove (41:13), apply (25:3:b) for vΓ(R) to R = J − S and R = J.
For these sets I − R = S ∪ K and I − R = K, respectively. So (41:13)
becomes

vΓ(S) − vΓ(S ∪ K) = −vΓ(K),

i.e.

vΓ(S ∪ K) = vΓ(S) + vΓ(K),

and this is the special case of (41:6) with T = K.
MODIFICATION OF THE THEORY 345
Thus we have filled in the lacuna mentioned at the beginning of this
paragraph and answered the questions of (41:A):

(41:B) The further postulate which must be imposed upon vΓ(R)
is this: (41:8), i.e. (41:9).

All these put together answer the question of 41.3.2. concerning decomposability:

(41:C) The game Γ is decomposable with respect to the sets J and K
(cf. 41.3.2.) if and only if it fulfills these conditions: (41:6), i.e.
(41:7), and (41:8), i.e. (41:9).
41.5. Desirability of a Modification
41.5.1. The two conditions which we proved equivalent to decomposability
in (41:C) are of very different character. (41:6) (i.e. (41:7)) is the
really essential one, while (41:8) (i.e. (41:9)) expresses only a rather incidental
circumstance. We will justify this rigorously below, but first a qualitative
remark will be useful. The prototype of our concept of decomposition
was the game referred to at the beginning of 41.2.1.: the game represented
by the corner VIII of 35.2. Now this game fulfilled (41:6), but not
(41:8). (The former follows from (35:7) in 35.2.1., the latter from v(J) =
v((1,2,3)) = 1 ≠ 0.) We nevertheless considered that game as decomposable
(with J = (1,2,3), K = (4)); how is it then possible that it
violates the condition (41:8) which we found to be necessary for decomposability?
41.5.2. The answer is simple: For the above game the constituents Δ
(in J = (1,2,3)) and H (in K = (4)) do not completely satisfy (25:3:a)-
(25:3:c) in 25.3.1. To be precise, they do not fulfill the consequence
(25:4) in 25.4.1.: vΔ(J) = vH(K) = 0 is not true (and it was from this
condition that we derived (41:8)). In other words: the constituents of Γ
are not zero-sum games. This point, of course, was perfectly clear in
35.2.2., where it received due consideration.

Consequently we must endeavor to get rid of the condition (41:8),
recognizing that this may force us to consider other than zero-sum games.
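The situation just described can be made concrete. In the following sketch (our encoding; the two-element values are those implied by the dummy relation v(S ∪ (4)) = v(S) − 1 and the zero-sum condition) the corner VIII game satisfies the additivity (41:6), i.e. (41:7), across J = (1,2,3), K = (4), yet v(J) = 1 ≠ 0, so (41:8) fails:

```python
from itertools import chain, combinations

def subsets(players):
    """All subsets of a player set, as frozensets."""
    s = sorted(players)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

# Corner VIII of Q: players 1,2,3 play an essential three-person game among
# themselves, and player 4 is a dummy whose presence merely adds his fixed
# value -1 (reduced form, gamma = 1).
three = {0: 0, 1: -1, 2: 2, 3: 1}          # values on the subsets of J = (1,2,3)
v = {R: three[len(R - {4})] + (-1 if 4 in R else 0)
     for R in subsets({1, 2, 3, 4})}

J, K = frozenset({1, 2, 3}), frozenset({4})
assert v[frozenset({1, 2, 3, 4})] == 0               # the game is zero-sum
assert all(v[R] == v[R & J] + v[R & K] for R in v)   # (41:6), i.e. (41:7), holds
assert v[J] == 1                                     # but (41:8) is violated
```

The violation of (41:8) records precisely that the three-person constituent, taken by itself, is not a zero-sum game.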
42. Modification of the Theory
42.1. No Complete Abandoning of the Zero-sum Condition
42.1. Complete abandonment of the zero-sum condition for our games¹
would mean that the functions ℋₖ(τ₁, . . . , τₙ) which characterize them
in the sense of 11.2.3. are entirely unrestricted. I.e. that the requirement

(42:1)    Σ_{k=1}^n ℋₖ(τ₁, . . . , τₙ) ≡ 0

1 We again denote the players by 1, . . . , n.
of 11.4. and 25.1.3. is dropped, with nothing else to take its place. This
would necessitate a revision of considerable significance, since the construction
of the characteristic function in 25. depended upon (25:1), i.e. (42:1), and
would therefore have to be taken up de novo.

Ultimately this revision will become necessary (cf. Chapter XI), but not
yet at the present stage.

In order to get a precise idea of just what is necessary now, let us make
the auxiliary considerations contained in 42.2.1., 42.2.2. below.
42.2. Strategic Equivalence. Constant-sum Games
42.2.1. Consider a zero-sum game Γ which may or may not fulfill
conditions (41:6) and (41:8). Pass from Γ to a strategically equivalent
game Γ′ in the sense of 27.1.1., 27.1.2., with the α₁⁰, . . . , αₙ⁰ described
there. It is evident that (41:6) for Γ is equivalent to the same for Γ′.¹

The situation is altogether different for (41:8). Passage from Γ to
Γ′ changes the left-hand side of (41:8) by Σ_{k in J} αₖ⁰, hence the validity of
(41:8) in one case is by no means implied by that in the other. Indeed
this is true:

(42:A) For every Γ it is possible to choose a strategically equivalent
game Γ′ so that the latter fulfills (41:8).

Proof: The assertion is that we can choose α₁⁰, . . . , αₙ⁰ with Σ_{k=1}^n αₖ⁰ = 0
(this is (27:1) in 27.1.1.) so that

v(J) + Σ_{k in J} αₖ⁰ = 0.

Now this is obviously possible if J ≠ ∅ or I, since then Σ_{k in J} αₖ⁰ can be
given any assigned value. For J = ∅ or I, there is nothing to prove, as
then v(J) = 0 by (25:3:a) in 25.3.1. and (25:4) in 25.4.1.
This result can be interpreted as follows: If we refrain from considering
other than zero-sum games,² then condition (41:6) expresses that while
the game Γ may not be decomposable itself, it is strategically equivalent
to some decomposable game Γ′.³
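The proof of (42:A) is constructive, and the construction is easily coded. In the following Python sketch (ours; the function name and the dict encoding are illustrative conventions) only the value v(J) is needed:

```python
def equalizing_shifts(v, I, J):
    """Construction from the proof of (42:A): choose alpha_k^0 with total
    sum 0 over I (condition (27:1)) such that v(J) plus the sum of the
    alpha_k^0 over J vanishes. Assumes a proper splitting: {} != J != I."""
    I, J = frozenset(I), frozenset(J)
    assert frozenset() != J != I
    inside, outside = next(iter(J)), next(iter(I - J))
    alpha = {k: 0 for k in I}
    alpha[inside] = -v[J]       # cancels v(J) in (41:8) ...
    alpha[outside] = v[J]       # ... while keeping the total sum 0
    return alpha

# Example with v(J) = 1, I = (1,2,3,4), J = (1,2,3):
I, J = {1, 2, 3, 4}, frozenset({1, 2, 3})
alpha = equalizing_shifts({J: 1}, I, J)
assert sum(alpha.values()) == 0                  # (27:1)
assert 1 + sum(alpha[k] for k in J) == 0         # (41:8) holds for Gamma'
```

Many other choices of the αₖ⁰ serve equally well; only the two sums matter.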
42.2.2. The above rigorous result makes it clear where the weakness
of our present arrangement lies. Decomposability is an important strategic
property, and it is therefore inconvenient that of two strategically equivalent
games one may be termed decomposable without the other. It is, therefore,

1 By (27:2) in 27.1.1. Observe that the vΓ(S), vΓ′(S) of (42:A) are the v(S), v′(S) of
(27:2) loc. cit.
2 I.e. we require this not only for Γ, but also for its constituents Δ, H.
3 The treatment of the constituents in 35.2.2. amounts to exactly this, as an inspection
of footnote 1 on p. 300 shows explicitly.
desirable to widen these concepts so that decomposability becomes
invariant under strategic equivalence.

In other words: We want to modify our concepts so that the transformation
(27:2) of 27.1.1., which defines strategic equivalence, does not interfere
with the relationship between a decomposable game Γ and its constituents
Δ and H. This relationship is expressed by (41:3):

(42:2)    vΓ(S ∪ T) = vΔ(S) + vH(T)    for S ⊆ J, T ⊆ K.

Now if we use (27:2) with the same αₖ⁰ for all three games Γ, Δ, H, then (42:2)
is manifestly undisturbed. The only trouble is with the preliminary
condition (27:1). This states for Γ, Δ, H that

Σ_{k in I} αₖ⁰ = 0,    Σ_{k in J} αₖ⁰ = 0,    Σ_{k in K} αₖ⁰ = 0,

respectively, and while we now assume the first relation true, the two others
may fail.

Hence the natural way out is to discard (27:1) of 27.1.1. altogether.
I.e. to widen the domain of games which we consider, by including all
those games which are strategically equivalent to zero-sum ones by virtue
of the transformation formula (27:2) alone, without demanding (27:1).
As was seen in 27.1.1. this amounts to replacing the functions

ℋₖ(τ₁, . . . , τₙ)

of the latter by new functions

ℋ′ₖ(τ₁, . . . , τₙ) ≡ ℋₖ(τ₁, . . . , τₙ) + αₖ⁰.

(The α₁⁰, . . . , αₙ⁰ are no longer subject to (27:1).) The systems of
functions ℋ′ₖ(τ₁, . . . , τₙ) which are obtained in this way from the systems
of functions ℋₖ(τ₁, . . . , τₙ) which fulfill (42:1) in 42.1. are easy to characterize.
The characteristic is (in place of (42:1) loc. cit.) the property

(42:3)    Σ_{k=1}^n ℋ′ₖ(τ₁, . . . , τₙ) ≡ s.¹
Summing up:

(42:B) We are widening the domain of games which we consider, by
passing from the zero-sum games to the constant-sum games.²
At the same time, we widen the concept of strategic equivalence,

1 s is an arbitrary constant ≷ 0. In the transformation (27:2) which produces this
game from a zero-sum one, there is obviously s = Σ_{k=1}^n αₖ⁰.
2 This gives a precise meaning to the statement at the beginning of 42.1. according to
which we are not yet prepared to consider all games unrestrictedly.
introduced in 27.1.1., by defining it again by the transformation
(27:2) loc. cit., but dropping the condition (27:1) eod.
42.2.3. It is essential to recognize that our above generalizations do not
alter our main ideas on strategic equivalence. This is best done by considering
the following two points.

First, we stated in 25.2.2. that we proposed to understand all quantitative
properties of a game by means of its characteristic function alone.
One must realize that the reasons for this are just as good in our present
domain of constant-sum games as in the original (and narrower) one of
zero-sum games. The reason is this:

(42:C) Every constant-sum game is strategically equivalent to a
zero-sum game.

Proof: The transformation (27:2) obviously replaces the s of (42:3)
above by s + Σ_{k=1}^n αₖ⁰. Now it is possible to choose the α₁⁰, . . . , αₙ⁰ so
as to make this s + Σ_{k=1}^n αₖ⁰ = 0, i.e. to carry the given constant-sum game
into a (strategically equivalent) zero-sum game.
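The proof of (42:C) likewise amounts to a single choice of shifts. A Python sketch of ours (players indexed 0, . . . , n−1 and strategies given as tuples are conventions of this sketch, not of the text):

```python
def to_zero_sum(H, n):
    """Construction from the proof of (42:C): given the payoff functions
    H[k] of a constant-sum game (42:3), shift player 0 by alpha_0^0 = -s
    (all other alpha_k^0 = 0); the resulting game is zero-sum."""
    probe = tuple(0 for _ in range(n))        # any strategy tuple yields s
    s = sum(H[k](probe) for k in range(n))
    shifted = dict(H)
    shifted[0] = lambda tau: H[0](tau) - s
    return shifted

# A two-person constant-sum game with s = 5: player 0 receives tau_0,
# player 1 receives 5 - tau_0 (strategies 0, 1, 2 for each player).
H = {0: lambda t: t[0], 1: lambda t: 5 - t[0]}
Z = to_zero_sum(H, 2)
assert all(Z[0]((a, b)) + Z[1]((a, b)) == 0
           for a in range(3) for b in range(3))
```

The constancy of the sum is what makes the single probe sufficient: s is the same for every strategy tuple.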
Second, our new concept of strategic equivalence was only necessary
for the sake of the new (non-zero-sum) games that we introduced. For the
old (zero-sum) games it is equivalent to the old concept. In other words:
If two zero-sum games obtain from each other by means of the transformation
(27:2) in 27.1.1., then (27:1) is automatically fulfilled. Indeed, this
was already observed in footnote 2 on p. 246.
42.3. The Characteristic Function in the New Theory
42.3.1. Given a constant-sum game Γ′ (with the ℋ′ₖ(τ₁, . . . , τₙ)
fulfilling (42:3)), we could introduce its characteristic function v′(S) by
repeating the definitions of 25.1.3.¹ On the other hand, we may follow
the procedure suggested by the argumentation of 42.2.2., 42.2.3.: We can
obtain Γ′ with the functions ℋ′ₖ(τ₁, . . . , τₙ) from a zero-sum game Γ
with the functions ℋₖ(τ₁, . . . , τₙ) as in 42.2.2., i.e. by

(42:4)    ℋ′ₖ(τ₁, . . . , τₙ) ≡ ℋₖ(τ₁, . . . , τₙ) + αₖ⁰

with appropriate α₁⁰, . . . , αₙ⁰ (cf. footnote 1 on p. 246), and then define the
characteristic function v′(S) of Γ′ by means of (27:2) in 27.1.1., i.e. by

(42:5)    v′(S) = v(S) + Σ_{k in S} αₖ⁰.

1 The whole arrangement of 25.1.3. can be repeated literally, although Γ′ is no longer
zero-sum, with two exceptions: In (25:1) and (25:2) of 25.1.3. we must add s to the
extreme right-hand term. (This is so because we now have (42:3) in place of (42:1).)
This difference is entirely immaterial.
MODIFICATION OF THE THEORY 349
Now the two procedures are equivalent, i.e. the v′(S) of (42:4), (42:5) coincides with the one obtained by the reapplication of 25.1.3. Indeed, an inspection of the formulae of 25.1.3. shows immediately that the substitution of (42:4) there produces the result (42:5).¹,²
42.3.2. v(S) is a characteristic function of a zero-sum game if and only if it fulfills the conditions (25:3:a)-(25:3:c) of 25.3.1., as was pointed out there and in 26.2. (The proof was given in 25.3.3. and 26.1.) What do these conditions become in the case of a constant-sum game?
In order to answer this question, let us remember that (25:3:a)-(25:3:c) loc. cit. imply (25:4) in 25.4.1. Hence we can add (25:4) to them, and modify (25:3:b) by adding v(I) to its right hand side (this is no change owing to (25:4)). Thus the characterization of the v(S) of all zero-sum games becomes this:
(42:6:a)  v(∅) = 0,
(42:6:b)  v(S) + v(I − S) = v(I),
(42:6:c)  v(S) + v(T) ≦ v(S ∪ T) if S ∩ T = ∅,
and
(42:6:d)  v(I) = 0.
Now the v′(S) of all constant-sum games obtain from these v(S) by subjecting them to the transformation (42:5) of 42.3.1. How does this transformation affect (42:6:a)-(42:6:d)?
One verifies immediately that (42:6:a)-(42:6:c) are entirely unaffected, while (42:6:d) is completely obliterated.³ So we see:
(42:D) v(S) is the characteristic function of a constant-sum game if and only if it satisfies the conditions (42:6:a)-(42:6:c). (We write from now on v(S) for v′(S).)
As mentioned above, (42:6:d) is no longer valid. However, we have

(42:6:d*)  v(I) = s.

Indeed, this is clear from (42:3), considering the procedure of 25.1.3. It can also be deduced by comparing footnote 1 on p. 347 and footnote 3 above (our v(S) is the v′(S) there). Besides, (42:6:d*) is intuitively clear: A coalition of all players obtains the fixed sum s of the game.
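The finite criterion (42:D) can be checked mechanically for a game given by a table of v(S). A minimal sketch follows; the three-person example (v = 0 on singletons, v = 3 on pairs and on I, so s = 3) is invented for illustration.

```python
from itertools import combinations

# Check of (42:D): v is the characteristic function of a constant-sum
# game iff (42:6:a)-(42:6:c) hold.  Coalitions are frozensets, v a dict.
def subsets(I):
    return [frozenset(c) for r in range(len(I) + 1)
            for c in combinations(sorted(I), r)]

def is_constant_sum_char_fn(v, I):
    I = frozenset(I)
    if v[frozenset()] != 0:                                   # (42:6:a)
        return False
    if any(v[S] + v[I - S] != v[I] for S in subsets(I)):      # (42:6:b)
        return False
    return all(v[S] + v[T] <= v[S | T]                        # (42:6:c)
               for S in subsets(I) for T in subsets(I)
               if not (S & T))

# an invented 3-person constant-sum game with s = v(I) = 3
I = {1, 2, 3}
v = {S: {0: 0, 1: 0, 2: 3, 3: 3}[len(S)] for S in subsets(I)}
bad = dict(v)
bad[frozenset([1])] = 5     # destroys (42:6:b)
```
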
¹ The verbal equivalent of this consideration is easily found.
² Had we decided to define v′(S) by means of (42:4), (42:5) only, a question of ambiguity would have arisen. Indeed: A given constant-sum game Γ′ can obviously be obtained from many different zero-sum games Γ by (42:4); will then (42:5) always yield the same v′(S)?
It would be easy to prove directly that this is the case. This is unnecessary, however, because we have shown that the v′(S) of (42:5) is always equal to that one of 25.1.3., and that v′(S) is defined unambiguously, with the help of Γ′ alone.
³ According to (42:5), the right hand side of (42:6:d) goes over into Σ_{k=1}^n α_k^0, and this sum is completely arbitrary.
350 COMPOSITION AND DECOMPOSITION OF GAMES
42.4. Imputations, Domination, Solutions in the New Theory
42.4.1. From now on, we are considering characteristic functions of any constant-sum game, i.e. functions v(S) subject to (42:6:a)-(42:6:c) only.
Our first task in this wider domain is naturally that of extending to it the concepts of imputations, dominations, and solutions as defined in 30.1.1.
Let us begin with the distributions or imputations. We can take over from 30.1.1. their interpretation as vectors

α⃗ = {α_1, ···, α_n}.
Of the conditions (30:1), (30:2) eod. we may conserve (30:1):

(42:7)  α_i ≧ v((i))

unchanged; the reasons referred to there¹ are just as valid now as then. (30:2) eod., however, must be modified. The constant sum of the game being s (cf. (42:3) and (42:6:d*) above), each imputation should distribute this amount, i.e. it is natural to postulate

(42:8)  Σ_{i=1}^n α_i = s.

By (42:6:d*) this is equivalent to

(42:8*)  Σ_{i=1}^n α_i = v(I).²
The definitions of effectivity, domination, solution we take over unchanged from 30.1.1.;³ the supporting arguments brought forward in the discussions which led up to those definitions appear to lose no strength by our present generalization.
42.4.2. These considerations receive their final corroboration by observing this:
(42:E) For our new concept of strategic equivalence of constant-sum games Γ, Γ′,⁴ there exists an isomorphism of their imputations, i.e. a one-to-one mapping of those of Γ on those of Γ′, which leaves the concepts of 30.1.1.⁵ invariant.
This is an analogue of (31:Q) in 31.3.3. and it can be demonstrated in the same way. As there, we define the correspondence

(42:9)  α⃗ ↔ α⃗′
¹ α_i < v((i)) would be unacceptable, cf. e.g. the beginning of 29.2.1.
² For the special case of a zero-sum game v(I) = 0, so (42:8), (42:8*) coincide, as they must, with (30:2) loc. cit.
³ I.e. (30:3); (30:4:a)-(30:4:c); (30:5:a), (30:5:b), or (30:5:c) loc. cit., respectively.
⁴ As defined at the end of 42.2.2., i.e. by (27:2) in 27.1.1., without (27:1) eod.
⁵ As redefined in 42.4.1.
between the imputations α⃗ = {α_1, ···, α_n} of Γ and the imputations α⃗′ = {α′_1, ···, α′_n} of Γ′ by

(42:10)  α′_k = α_k + α_k^0,

where the α_1^0, ···, α_n^0 are those of (27:2) in 27.1.1.
Now the proof of (31:Q) in 31.3.3. carries over almost literally. The one difference is that (30:2) of 30.1.1. is replaced by our (42:8); but since (27:2) in 27.1.1. gives v′(I) = v(I) + Σ_{k=1}^n α_k^0, this too takes care of itself.¹ The reader who goes over 31.3. again will see that everything else said there applies equally to the present case.
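The correspondence (42:9), (42:10) is simple enough to state as a sketch; the numerical α_k^0 below are invented.

```python
# Sketch of (42:10): the one-to-one mapping between the imputations of
# two strategically equivalent games, alpha'_k = alpha_k + a0_k.
def map_imputation(alpha, a0):
    return tuple(x + a for x, a in zip(alpha, a0))

def inverse_map(alpha_prime, a0):
    # the mapping is invertible, which is what makes it an isomorphism
    return tuple(x - a for x, a in zip(alpha_prime, a0))

a0 = (-1, 0, 1)   # invented values of the a_k^0 of (27:2)
```
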
42.5. Essentiality, Inessentiality, and Decomposability in the New Theory
42.5.1. We know from (42:C) in 42.2.3. that every constant-sum game is strategically equivalent to a zero-sum game. Hence (42:E) allows us to carry over the general results of 31. from the zero-sum games to the constant-sum ones, always passing from the latter class to the former one by strategic equivalence.
This forces us to define inessentiality for a constant-sum game by strategic equivalence to an inessential zero-sum game. We may state therefore:
(42:F) A zero-sum game is inessential if and only if it is strategically equivalent to the game with v(S) ≡ 0. (Cf. 23.1.3. or (27:C) in 27.4.2.) By the above, the same is true for a constant-sum game. (But we must use our new definitions of inessentiality and of strategic equivalence.)
Essentiality is, of course, defined as negation of inessentiality.
Application of the transformation formula (42:5) of 42.3.1. to the criteria of 27.4. shows that there are only minor changes.
(27:8) in 27.4.1. must be replaced by

(42:11)  γ = (1/n) [v(I) − Σ_{k=1}^n v((k))],

since the right hand side of this formula is invariant under (42:5) and it goes over into (27:8) loc. cit. for v(I) = 0 (i.e. the zero-sum case).
The substitution of (42:11) for (27:8) necessitates replacement of the 0 on the right hand side of both formulae in the criterion (27:B) of 27.4.1. by v(I). The criteria (27:C), (27:D) of 27.4.2. are invariant under (42:5), and hence unaffected.
¹ And this was the only point in the proof referred to at which Σ_{k=1}^n α_k^0 = 0 (i.e. (27:1) in 27.1.1., which we no longer require) is used.
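Taking (42:11) as γ = (1/n)(v(I) − Σ_k v((k))), a reconstruction consistent with the invariance argument in the text, the modified criterion can be sketched; the example games are invented.

```python
# Sketch of the modified essentiality criterion: with (42:11) read as
#   gamma = (1/n) * (v(I) - sum_k v((k)))
# (our reconstruction), the game is inessential iff gamma = 0, i.e. iff
# the 0 of (27:B) is reached with v(I) in place of 0.
def gamma(v, I):
    n = len(I)
    return (v[frozenset(I)] - sum(v[frozenset([k])] for k in I)) / n

# invented constant-sum example: v = 0 on singletons, v(I) = 3
I = {1, 2, 3}
v = {frozenset(): 0, frozenset([1]): 0, frozenset([2]): 0,
     frozenset([3]): 0, frozenset([1, 2]): 3, frozenset([1, 3]): 3,
     frozenset([2, 3]): 3, frozenset([1, 2, 3]): 3}
w = {S: len(S) for S in v}   # an additive, hence inessential, game
```
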
42.5.2. We can now return to the discussion of composition and decomposition in 41.3.-41.4., in the wider domain of all constant-sum games.
All of 41.3. can be repeated literally.
When we come to 41.4., the question (41:A) formulated there again presents itself. In order to determine whether any postulates beyond (41:6), i.e. (41:7) of 41.3.2. are now needed, we must investigate (42:6:a)-(42:6:c) in 42.3.2., instead of (25:3:a)-(25:3:c) in 25.3.1. (for all three of v_Γ(R), v_Δ(S), v_H(T)).
(42:6:a), (42:6:c) are immediately disposed of, exactly as (25:3:a), (25:3:c) in 41.4. As to (42:6:b), the proof of (25:3:b) in 41.4. is essentially applicable, but this time no extra condition arises (like (41:8) or (41:9) loc. cit.). To simplify matters, we give this proof in full.
Ad (42:6:b): We will derive its validity for v_Δ(S) and v_H(T) from the assumed one for v_Γ(R). By symmetry it suffices to consider v_Δ(S).
The relation to be proven is

(42:12)  v_Δ(S) + v_Δ(J − S) = v_Δ(J).

By (41:4) this means

(42:12*)  v_Γ(S) + v_Γ(J − S) = v_Γ(J).

To prove (42:12*), apply (42:6:b) for v_Γ(R) to R = J − S and R = J. For these I − R = S ∪ K and I − R = K, respectively. So (42:12*) becomes

v_Γ(S) + v_Γ(I) − v_Γ(S ∪ K) = v_Γ(I) − v_Γ(K),

i.e.

v_Γ(S ∪ K) = v_Γ(S) + v_Γ(K),

and this is the special case of (41:6) with T = K.
Thus we have improved upon the result (41:C) of 41.4. as follows:
(42:G) In the domain of all constant-sum games the game Γ is decomposable with respect to the sets J and K (cf. 41.3.2.) if and only if it fulfills the condition (41:6), i.e. (41:7).
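Criterion (42:G) is finite and can be tested directly for a tabulated v. A sketch; both example games are invented (the additive game is inessential, hence decomposable for every J, while the essential one is not).

```python
from itertools import combinations

# Sketch of (42:G): in the constant-sum domain, decomposability for
# J, K = I - J is condition (41:6) alone:
#   v(S ∪ T) = v(S) + v(T) for all S ⊆ J, T ⊆ K.
def subsets(I):
    return [frozenset(c) for r in range(len(I) + 1)
            for c in combinations(sorted(I), r)]

def is_decomposable_for(v, I, J):
    J = frozenset(J)
    K = frozenset(I) - J
    return all(v[S | T] == v[S] + v[T]
               for S in subsets(J) for T in subsets(K))

I = {1, 2, 3}
additive = {S: sum(S) for S in subsets(I)}                       # inessential
essential = {S: {0: 0, 1: 0, 2: 3, 3: 3}[len(S)] for S in subsets(I)}
```
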
42.5.3. Comparison of (41:C) in 41.4. and of (42:G) in 42.5.2. shows that the passage from zero-sum to constant-sum games rids us of the unwanted condition (41:8), i.e. (41:9), for decomposability.
Decomposability is now defined by (41:6), i.e. (41:7), alone, and it is invariant under strategic equivalence, as it should be.
We also know that when a game Γ is decomposed into two (constituent) games Δ and H (all of them constant-sum only!), we can make all these games zero-sum by strategic equivalence. (Cf. (42:C) in 42.2.3. for Γ, and then (42:A) in 42.2.1. et sequ. for Δ, H.)
THE DECOMPOSITION PARTITION 353
Thus we can always use one of the two domains of games, zero-sum or constant-sum, whichever is more convenient for the problem just under consideration.
In the remainder of this chapter we will continue to consider constant-sum games, unless the opposite is explicitly stated.
43. The Decomposition Partition
43.1. Splitting Sets. Constituents
43.1. We defined the decomposability of a game Γ not per se, but with respect to a decomposition of the set I of all players into two complementary sets, J, K.
Therefore it is feasible to take this attitude: Consider the game Γ as given, and the sets J, K as variable. Since J determines K (indeed K = I − J), it suffices to treat J as the variable. Then we have this question:
Given a game Γ (with the set of players I), for which sets J ⊆ I (and the corresponding K = I − J) is Γ decomposable?
We call those J(⊆ I) for which this is the case the splitting sets of Γ. The constituent game Δ which obtains in this decomposition (cf. 41.2.1. and (41:4) of 41.3.2.) is the J-constituent of Γ.¹
A splitting set J is thus defined by (41:6), i.e. (41:7) in 41.3.2., where K = I − J must be substituted.
The reader will note that this concept has a very simple intuitive meaning: A splitting set is a self-contained group of players, who neither influence, nor are influenced by, the others as far as the rules of the game are concerned.
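For small games the splitting sets can simply be enumerated from (41:6). A sketch; the four-person example (an essential three-person game on players 1, 2, 3, plus a self-contained player 4) is invented.

```python
from itertools import combinations

# Sketch: enumerating the splitting sets of a game via (41:6).
def subsets(I):
    return [frozenset(c) for r in range(len(I) + 1)
            for c in combinations(sorted(I), r)]

def is_splitting(v, I, J):
    K = frozenset(I) - frozenset(J)
    return all(v[S | T] == v[S] + v[T]
               for S in subsets(J) for T in subsets(K))

def splitting_sets(v, I):
    return {J for J in subsets(I) if is_splitting(v, I, J)}

# invented example: players 1,2,3 play an essential 3-person zero-sum
# game (w), and player 4 contributes nothing to any coalition
I = {1, 2, 3, 4}
w = {0: 0, 1: -1, 2: 1, 3: 0}                 # value by coalition size
v = {S: w[len(S - {4})] for S in subsets(I)}  # player 4 is self-contained
```
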
43.2. Properties of the System of All Splitting Sets
43.2.1. The totality of all splitting sets of a given game is characterized by an aggregate of simple properties. Most of these have an immediate intuitive meaning, which may make mathematical proof seem unnecessary. We will nevertheless proceed systematically and give proofs, stating the intuitive interpretations in footnotes. Throughout what follows we write v(S) for v_Γ(S) (the characteristic function of Γ).
(43:A) J is a splitting set if and only if its complement K = I − J is one.²
Proof: The decomposability of Γ involves J and K symmetrically.
(43:B) ∅ and I are splitting sets.³
Proof: (41:6) or (41:7) with J = ∅, K = I are obviously true, as v(∅) = 0.
¹ By the same definition the game H (cf. 41.2.1. and (41:5) in 41.3.2.) is then the K-constituent (K = I − J) of Γ.
² That a set of players is self-contained in the sense of 43.1. is clearly the same statement as that the complement is self-contained.
³ That these are self-contained is tautological.
43.2.2.
(43:C) J′ ∩ J″ and J′ ∪ J″ are splitting sets if J′, J″ are.¹
Proof: Ad J′ ∪ J″: As J′, J″ are splitting sets, we have (41:6) for J, K equal to J′, I − J′ and J″, I − J″. We wish to prove it for J′ ∪ J″, I − (J′ ∪ J″). Consider therefore two S ⊆ J′ ∪ J″, T ⊆ I − (J′ ∪ J″). Let S′ be the part of S in J′; then S″ = S − S′ lies in the complement of J′, and as S ⊆ J′ ∪ J″, S″ also lies in J″. So S = S′ ∪ S″, S′ ⊆ J′, S″ ⊆ J″. Now S′ ⊆ J′, S″ ⊆ I − J′ and (41:6) for J′, I − J′ give

(43:1)  v(S) = v(S′) + v(S″).

Next S″ ⊆ I − J′ and T ⊆ I − (J′ ∪ J″) ⊆ I − J′, so S″ ∪ T ⊆ I − J′. Also S′ ⊆ J′. Clearly S′ ∪ (S″ ∪ T) = S ∪ T. Hence (41:6) for J′, I − J′ also gives

(43:2)  v(S ∪ T) = v(S′) + v(S″ ∪ T).

Finally S″ ⊆ J″ and T ⊆ I − (J′ ∪ J″) ⊆ I − J″. Hence (41:6) for J″, I − J″ gives

(43:3)  v(S″ ∪ T) = v(S″) + v(T).

Now substitute (43:3) into (43:2) and then contract the right hand side by (43:1). This gives

v(S ∪ T) = v(S) + v(T),

which is (41:6), as desired.
Ad J′ ∩ J″: Use (43:A) and the above result. As J′, J″ are splitting sets, the same obtains successively for I − J′, I − J″, (I − J′) ∪ (I − J″), which is clearly I − (J′ ∩ J″),² and J′ ∩ J″, the last one being the desired expression.
43.3. Characterization of the System of All Splitting Sets. The Decomposition Partition
43.3.1. It may be that there exist no other splitting sets than the trivial ones ∅, I (cf. (43:B) above). In that case we call the game Γ indecomposable.³ Without studying this question any further,⁴ we continue to investigate the splitting sets of Γ.
¹ The intersection J′ ∩ J″: It may strike the reader as odd that two self-contained sets J′, J″ should have a non-empty intersection at all. This is possible, however, as the example J′ = J″ shows. The deeper reason is that a self-contained set may well be the sum of smaller self-contained sets (proper subsets). (Cf. (43:H) in 43.3.) Our present assertion is that if two self-contained sets J′, J″ have a non-empty intersection J′ ∩ J″, then this intersection is such a self-contained subset. In this form it will probably appear plausible.
The sum J′ ∪ J″: That the sum of two self-contained sets will again be self-contained stands to reason. This may be somewhat obscured when a non-empty intersection J′ ∩ J″ exists, but this case is really harmless, as discussed above. The proof which follows is actually primarily an exact account of the ramifications of just this case.
² The complement of the intersection is the sum of the complements.
³ Actually most games are indecomposable; cf. the criterion (42:G) in 42.5.2., which requires the restrictive equations (41:6), i.e. (41:7) in 41.3.2.
⁴ Yet! Cf. footnote 3 and its references.
(43:D) Consider a splitting set J of Γ and the J-constituent Δ of Γ. Then a J′ ⊆ J is a splitting set of Δ if and only if it is one of Γ.¹
Proof: Considering (41:4), J′ is a splitting set of Δ by virtue of (41:6) when

(43:4)  v(S ∪ T) = v(S) + v(T) for S ⊆ J′, T ⊆ J − J′.

(We write v(S) for v_Γ(S).) Again by (41:6), J′ is a splitting set of Γ when

(43:5)  v(S ∪ T) = v(S) + v(T) for S ⊆ J′, T ⊆ I − J′.

We must prove the equivalence of (43:4) and (43:5). As J − J′ ⊆ I − J′, (43:4) is clearly a special case of (43:5); hence we need only prove that (43:4) implies (43:5).
Assume, therefore, (43:4). We may use (41:6) for Γ with J, K = I − J. Consider two S ⊆ J′, T ⊆ I − J′. Let T′ be the part of T in J; then T″ = T − T′ lies in I − J. So T = T′ ∪ T″, T′ ⊆ J, T″ ⊆ I − J and (41:6) for Γ with J, I − J give

(43:6)  v(T) = v(T′) + v(T″).

Next S ⊆ J′ ⊆ J and T′ ⊆ J, so S ∪ T′ ⊆ J. Also T″ ⊆ I − J. Clearly (S ∪ T′) ∪ T″ = S ∪ T. Hence (41:6) for Γ with J, I − J also gives

(43:7)  v(S ∪ T) = v(S ∪ T′) + v(T″).

Finally S ⊆ J′ and T′ ⊆ I − J′ and T′ ⊆ J, so T′ ⊆ J − J′. Hence (43:4) gives

(43:8)  v(S ∪ T′) = v(S) + v(T′).

Now substitute (43:8) into (43:7) and then contract the right hand side by (43:6). This gives precisely the desired (43:5).
43.3.2. (43:D) makes it worth while to consider those splitting sets J for which J ≠ ∅, but no proper subset J′ ≠ ∅ of J is a splitting set. We call such a set J, for obvious reasons, a minimal splitting set.
Consider our definitions of indecomposability and of minimality. (43:D) implies immediately:
(43:E) The J-constituent Δ (of Γ) is indecomposable if and only if J is a minimal splitting set.
The minimal splitting sets form an arrangement with very simple properties, and they determine the totality of all splitting sets. The statements follow:
(43:F) Any two different minimal splitting sets are disjunct.
(43:G) The sum of all minimal splitting sets is I.
¹ To be self-contained within a self-contained set is the same thing as to be such in the original (total) set. The statement may seem obvious; that it is not so will appear from the proof.
(43:H) By forming all sums of all possible aggregates of minimal splitting sets, we obtain precisely the totality of all splitting sets.¹
Proof: Ad (43:F): Let J′, J″ be two minimal splitting sets which are not disjunct. Then J′ ∩ J″ ≠ ∅ is splitting by (43:C), as it is ⊆ J′ and ⊆ J″. So the minimality of J′ and J″ implies that J′ ∩ J″ is equal to both J′ and J″. Hence J′ = J″.
Ad (43:G): It suffices to show that every k in I belongs to some minimal splitting set.
There exist splitting sets which contain the player k (e.g. I); let J be the intersection of all of them. J is splitting by (43:C). If J were not minimal, then there would exist a splitting set J′ ≠ ∅, J, which is ⊆ J. Now J″ = J − J′ = J ∩ (I − J′) is also a splitting set by (43:A), (43:C), and clearly also J″ ≠ ∅, J. Either J′ or J″ = J − J′ must contain k; say that J′ does. Then J′ is among the sets of which J is the intersection. Hence J′ ⊇ J. But as J′ ⊆ J and J′ ≠ J, this is impossible.
Ad (43:H): Every sum of minimal splitting sets is splitting by (43:C), so we need only prove the converse.
Let K be a splitting set. If J is minimal splitting, then J ∩ K is splitting by (43:C); also J ∩ K ⊆ J, hence either J ∩ K = ∅ or J ∩ K = J. In the first case J, K are disjunct, in the second J ⊆ K. So we see:
(43:I) Every minimal splitting set J is either disjunct with K or ⊆ K.
Let K′ be the sum of the former J, and K″ the sum of the latter. K′ ∪ K″ is the sum of all minimal splitting sets, hence by (43:G)

(43:9)  K′ ∪ K″ = I.

By their origin K′ is disjunct with K, and K″ is ⊆ K. I.e.

(43:10)  K′ ⊆ I − K,  K″ ⊆ K.

Now (43:9), (43:10) together necessitate K″ = K; hence K is a sum of a suitable aggregate of minimal splitting sets, as desired.
43.3.3. (43:F), (43:G) make it clear that the minimal splitting sets form a partition in the sense of 8.3.1., with the sum I. We call this the decomposition partition of Γ, and denote it by Π_Γ. Now (43:H) can be expressed as follows:
(43:H*) A splitting set K ⊆ I is characterized by the following property: The points of each element of Π_Γ go together as far as K is concerned, i.e. each element of Π_Γ lies completely inside or completely outside of K.
¹ The intuitive meaning of these assertions should be quite clear. They characterize the structure of the maximum possibilities of decomposition of Γ in a plausible way.
Thus Π_Γ expresses how far the decomposition of Γ in I can be pushed without destroying those ties which the rules of Γ establish between players.¹ By virtue of (43:E) the elements of Π_Γ are also characterized by the fact that they decompose Γ into indecomposable constituents.
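The decomposition partition Π_Γ can be computed by retaining only the minimal non-empty splitting sets, per (43:F)-(43:H). A sketch, with the same kind of invented example (an essential three-person game plus one self-contained player):

```python
from itertools import combinations

# Sketch of 43.3: the decomposition partition consists of the minimal
# non-empty splitting sets; by (43:F), (43:G) they partition I.
def subsets(I):
    return [frozenset(c) for r in range(len(I) + 1)
            for c in combinations(sorted(I), r)]

def is_splitting(v, I, J):
    K = frozenset(I) - frozenset(J)
    return all(v[S | T] == v[S] + v[T]
               for S in subsets(J) for T in subsets(K))

def decomposition_partition(v, I):
    sp = [J for J in subsets(I) if J and is_splitting(v, I, J)]
    return {J for J in sp if not any(J2 < J for J2 in sp)}

I = {1, 2, 3, 4}
w = {0: 0, 1: -1, 2: 1, 3: 0}                 # essential game on {1,2,3}
v = {S: w[len(S - {4})] for S in subsets(I)}  # player 4 is self-contained
```
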
43.4. Properties of the Decomposition Partition
43.4.1. The nature of the decomposition partition Π_Γ being established, it is natural to study the effect of the fineness of this partition. We wish to analyze only the two extreme possibilities: when Π_Γ is as fine as possible, i.e. when it dissects I down to the one-element sets, and when Π_Γ is as coarse as possible, i.e. when it does not dissect I at all. In other words: In the first case Π_Γ is the system of all one-element sets (in I), in the second case Π_Γ consists of I alone.
The meaning of these two extreme cases is easily established:
(43:J) Π_Γ is the system of all one-element sets (in I) if and only if the game is inessential.
Proof: It is clear from (43:H) or (43:H*) that the stated property of Π_Γ is equivalent to saying that all sets J(⊆ I) are splitting, i.e. (by 43.1.) that for any two complementary sets J and K(= I − J) the game Γ is decomposable. This means that (41:6) holds in all those cases. This implies, however, that the condition imposed by (41:6) on S, T (i.e. S ⊆ J, T ⊆ K) means merely that S, T are disjunct. Thus our statement becomes

v(S ∪ T) = v(S) + v(T) for S ∩ T = ∅.

Now this is precisely the condition of inessentiality by (27:D) in 27.4.2.
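The additivity condition just obtained is directly testable for a tabulated v. A sketch; both example games are invented.

```python
from itertools import combinations

# Sketch of (43:J): the partition is finest, i.e. every set is
# splitting, exactly when v is additive over disjunct coalitions,
# which is (27:D)'s inessentiality.
def subsets(I):
    return [frozenset(c) for r in range(len(I) + 1)
            for c in combinations(sorted(I), r)]

def is_inessential(v, I):
    return all(v[S | T] == v[S] + v[T]
               for S in subsets(I) for T in subsets(I)
               if not (S & T))

I = {1, 2, 3}
additive = {S: sum(S) for S in subsets(I)}
essential = {S: {0: 0, 1: 0, 2: 3, 3: 3}[len(S)] for S in subsets(I)}
```
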
(43:K) Π_Γ consists of I alone if and only if the game Γ is indecomposable.
Proof: It is clear from (43:H) (or (43:H*)) that the stated property of Π_Γ is equivalent to saying that ∅, I are the only splitting sets. But this is exactly the definition of indecomposability at the beginning of 43.3.
These results show that indecomposability and inessentiality are two opposite extremes for a game. In particular, inessentiality means that the decomposition of Γ, described at the end of 43.3., can be pushed through to the individual players, without ever severing any tie that the rules of the game Γ establish.² The reader should compare this statement with our original definition of inessentiality in 27.3.1.
43.4.2. The connection between inessentiality, decomposability, and the number n of players is as follows:
n = 1: This case is scarcely of practical importance. Such a game is clearly indecomposable,³ and it is at the same time inessential by the first remark in 27.5.2.
¹ I.e. without impairing the self-containedness of the resulting sets.
² I.e. that every player is self-contained in this game.
³ As I is a one-element set, ∅, I are its only subsets.
It should be noted that indecomposability and inessentiality are, by (43:J), (43:K), incompatible when n ≧ 2, but not when n = 1.
n = 2: Such a game, too, is necessarily inessential by the first remark of 27.5.2. Hence it is decomposable.
n ≧ 3: For these games decomposability is an exceptional occurrence. Indeed, decomposability implies (41:6) with some J ≠ ∅, I; hence K = I − J ≠ ∅, I. So we can choose j in J, k in K. Then (41:6) with S = (j), T = (k) gives

(43:11)  v((j, k)) = v((j)) + v((k)).

Now the only equations which the values of v(S) must satisfy are (25:3:a), (25:3:b) of 25.3.1. (if zero-sum games are considered) or (42:6:a), (42:6:b) of 42.3.2. (43:11) is neither of these, since only the sets (j), (k), (j, k) occur in (43:11), and these are none of the sets occurring in those equations, i.e. ∅ or I or complements, as n ≧ 3.¹ Thus (43:11) is an extra condition which is not fulfilled in general.
By the above an indecomposable game cannot have n = 2; hence it has n = 1 or n ≧ 3. Combining this with (43:E), we obtain the following peculiar result:
(43:L) Every element of the decomposition partition Π_Γ is either a one-element set, or else it has n ≧ 3 elements.
Note that the one-element sets in Π_Γ are the one-element splitting sets,² i.e. they correspond to those players who are self-contained, separated from the remainder of the game (from the point of view of the strategy of coalitions). They are the "dummies" in the sense of 35.2.3. and footnote 1 on p. 340. Consequently, our result (43:L) expresses this fact: Those players who are not "dummies" are grouped in indecomposable constituent games of n ≧ 3 players each.
This appears to be a general principle of social organization.
44. Decomposable Games. Further Extension of the Theory
44.1. Solutions of a (Decomposable) Game and Solutions of Its Constituents
44.1. We have completed the descriptive part of our study of composition and decomposition. Let us now pass to the central part of the problem: the investigation of the solutions of a decomposable game.
Consider a game Γ which is decomposable for J and I − J = K, with the J- and K-constituents Δ and H. We use strategic equivalence, as explained at the beginning of 42.5.3., to make all three games zero-sum.
Assume that the solutions for Δ as well as those for H are known; does this then determine the solutions for Γ? In other words: How do the solutions for a decomposable game obtain from those for its constituents?
Now there exists a surmise in this respect which appears to be the prima facie plausible one, and we proceed to formulate it.
¹ For n = 2 it is otherwise; (j, k) = I, and (j) and (k) are complements.
² Such a splitting set is, of course, automatically minimal.
DECOMPOSABLE GAMES 359
44.2. Composition and Decomposition of Imputations and of Sets of Imputations
44.2.1. Let us use the notations of 41.3.1. But as we write v(S) for v_Γ(S), this also replaces, by (41:4), (41:5), v_Δ(S), v_H(S).
On the other hand, we must distinguish between imputations for Γ, Δ, H.¹ In expressing this distinction, it is better to indicate the set of players to whom an imputation refers, instead of the game in which they are engaged. I.e. we will affix to them the symbols I, J, K rather than Γ, Δ, H.
In this sense we denote the imputations for I (i.e. Γ) by

(44:1)  α⃗_I = {α_1′, ···, α_k′, α_1″, ···, α_l″},

and those for J, K (i.e. Δ, H) by

(44:2)  β⃗_J = {β_1′, ···, β_k′},

(44:3)  γ⃗_K = {γ_1″, ···, γ_l″}.

If three such imputations are linked by the relationship

(44:4)  α_j′ = β_j′ for j′ = 1′, ···, k′,
        α_j″ = γ_j″ for j″ = 1″, ···, l″,

then we say that α⃗_I obtains by composition from β⃗_J, γ⃗_K, that β⃗_J, γ⃗_K obtain by decomposition from α⃗_I (for J, K), and that β⃗_J, γ⃗_K are the (J, K)-constituents of α⃗_I.
Since we are now dealing with zero-sum games, all these imputations must fulfill the conditions (30:1), (30:2) of 30.1.1. Now one verifies immediately for α⃗_I, β⃗_J, γ⃗_K linked by (44:4):
Ad (30:1) of 30.1.1.: The validity of this for β⃗_J, γ⃗_K is clearly equivalent to its validity for α⃗_I.
Ad (30:2) of 30.1.1.: For β⃗_J, γ⃗_K this states (using (44:4))

(44:5)  Σ_{j′=1′}^{k′} α_j′ = 0,

(44:6)  Σ_{j″=1″}^{l″} α_j″ = 0.

For α⃗_I it amounts to

(44:7)  Σ_{j′=1′}^{k′} α_j′ + Σ_{j″=1″}^{l″} α_j″ = 0.

¹ It is now convenient to reintroduce the notations of 41.3.1. for the players.
Thus its validity for β⃗_J, γ⃗_K implies the same for α⃗_I, while its validity for α⃗_I does not imply the same for β⃗_J, γ⃗_K: indeed (44:7) does imply the equivalence of (44:5) and (44:6), but it fails to imply the validity of either one.
So we have:
(44:A) Any two imputations β⃗_J, γ⃗_K can be composed to an α⃗_I, while an imputation α⃗_I can be decomposed into two β⃗_J, γ⃗_K if and only if it fulfills (44:5), i.e. (44:6).
We call such an α⃗_I decomposable (for J, K).
44.2.2. This situation is similar to that which prevails for the games themselves: Composition is always possible, while decomposition is not. Decomposability is again an exceptional occurrence.¹
It ought to be noted, finally, that the concept of composition of imputations has a simple intuitive meaning. It corresponds to the same operation of "viewing as one" two separate occurrences, which played the corresponding role for games in 41.2.1., 41.2.3., 41.2.4. Decomposition of an α⃗_I (into β⃗_J, γ⃗_K) is possible if and only if the two self-contained sets of players J, K are given by the imputation α⃗_I precisely their "just dues", which are zero. This is the meaning of the condition (44:A) (i.e. of (44:5), (44:6)).
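Composition and decomposition of imputations, and the condition (44:A), can be sketched with imputations as tuples (player order: first the j′ of J, then the j″ of K; the numbers are invented):

```python
# Sketch of (44:4)-(44:A): composition concatenates a J-imputation and
# a K-imputation; decomposition exists only when each part sums to
# zero, i.e. when (44:5) (and with it (44:6)) holds.
def compose(beta, gamma):
    return beta + gamma

def decompose(alpha, k):
    # k = number of players in J
    beta, gamma = alpha[:k], alpha[k:]
    if sum(beta) != 0:        # (44:5) fails, hence (44:6) fails too
        return None
    return beta, gamma
```
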
44.2.3. Consider a set V_J of imputations β⃗_J and a set W_K of imputations γ⃗_K. Let U_I be the set of those imputations α⃗_I which obtain by composition of all β⃗_J in V_J with all γ⃗_K in W_K. We then say that U_I obtains by composition from V_J, W_K; that V_J, W_K obtain by decomposition from U_I (for J, K); and that V_J, W_K are the (J, K)-constituents of U_I.
Clearly the operation of composition can always be carried out, whatever V_J, W_K, whereas a given U_I need not allow decomposition (for J, K). If U_I can be decomposed, we call it decomposable (for J, K).
Note that this decomposability of U_I restricts it very strongly; it implies, among other things, that all elements α⃗_I of U_I must be decomposable (cf. the interpretation at the end of 44.2.2.).
In order to interpret these concepts for the sets of imputations U_I, V_J, W_K more thoroughly, it is convenient to restrict ourselves to solutions of the games Γ, Δ, H.
¹ There are great technical differences between the concepts of decomposability etc., for games and for imputations. Observe, however, the analogy between (41:4), (41:5) in 41.3.2.; (41:8), (41:9), (41:10) in 41.4.2.; and our (44:4), (44:5), (44:6), (44:7).
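The set operation of 44.2.3. is a plain Cartesian-product construction. A minimal sketch (sets of tuples, numbers invented):

```python
from itertools import product

# Sketch of 44.2.3: composing a set V_J of imputations with a set W_K.
# Every pair (beta, gamma) contributes one composed imputation of U_I.
def compose_sets(V, W):
    return {beta + gamma for beta, gamma in product(V, W)}
```
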
44.3. Composition and Decomposition of Solutions.
The Main Possibilities and Surmises
44.3.1. Let V_J, W_K be two solutions for the games Δ, H respectively. Their composition yields an imputation set U_I which one might expect to be a solution for the game Γ. Indeed, U_I is the expression of a standard of behavior which can be formulated as follows. We give the verbal formulation in the text under (44:B:a)-(44:B:c), stating the mathematical equivalents in footnotes, which, as the reader will verify, add up precisely to our definition of composition.
(44:B:a) The players of J always obtain together exactly their "just dues" (zero), and the same is true for the players of K.¹
(44:B:b) There is no connection whatever between the fate of players in the set J and in the set K.²
(44:B:c) The fate of the players in J is governed by the standard of behavior V_J;³ the fate of the players in K is governed by the standard of behavior W_K.⁴
If the two constituent games are imagined to occur absolutely separate from each other, then this is the plausible way of viewing their separate solutions V_J, W_K as one solution U_I of the composite game Γ.
However, since a solution is an exact concept, this assertion needs a proof. I.e. we must demonstrate this:
(44:C) If V_J, W_K are solutions of Δ, H, then their composition U_I is a solution of Γ.
44.3.2. This, by the way, is another instance of the characteristic relationship between common sense and mathematical rigour. Although an assertion (in the present case that U_I is a solution whenever V_J, W_K are) is required by common sense, it has no validity within the theory (in this case based on the definitions of 30.1.1.) unless proved mathematically. To this extent it might seem that rigour is more important than common sense. This, however, is limited by the further consideration that if the mathematical proof fails to establish the common sense result, then there is a strong case for rejecting the theory altogether. Thus the primate of the mathematical procedure extends only to establishing checks on the theories in a way which would not be open to common sense alone.
¹ Every element α⃗_I of U_I is decomposable.
² Any β⃗_J which is used in forming U_I and any γ⃗_K which is used in forming U_I give by composition an element α⃗_I of U_I.
³ The above mentioned β⃗_J are precisely the elements of V_J.
⁴ The above mentioned γ⃗_K are precisely the elements of W_K.
It will be seen that (44:C) is true, although not trivial.
One might be tempted to expect that the converse of (44:C) is also true, i.e. to demand a proof of this:
(44:D) If U_I is a solution of Γ, then it can be decomposed into solutions V_J, W_K of Δ, H.
This is prima facie quite plausible: Since Γ is the composition of what are for all intents and purposes two entirely separate games, how could any solution of Γ fail to exhibit this composite structure?
The surprising fact is, however, that (44:D) is not true in general. The reader might think that this should induce us to abandon, or at least to modify materially, our theory (i.e. 30.1.1.) if we take the above methodological statement seriously. Yet we will show that the "common sense" basis for (44:D) is quite questionable. Indeed, our result, contradicting (44:D), will provide a very plausible interpretation which connects it successfully with well known phenomena in social organizations.
44.3.3. The proper understanding of the failure of (44 :D) and of the
validity of the theory which replaces it, necessitates rather detailed con
siderations. Before we enter upon these, it might be useful to make, in
anticipation, some indications as to how the failure of (44 :D) occurs.
It is natural, to split (44 :D) into two assertions:
(44:D:a) If U/ is a solution of F, then it is decomposable (for J, K).
(44:D:b) If a solution U/ of F is decomposable (for /, K), then its
constituents V/, W/c are solutions for A, H.
Now it will appear that (44:D:b) is true, and (44:D:a) is false. I.e.
it can happen that a decomposable game Γ possesses an indecomposable
solution.¹
However, the decomposability of a solution (or of any set of imputations)
is expressed by (44:B:a)-(44:B:c) in 44.3.1. So one or more of these conditions
must fail for the indecomposable solution referred to above. Now
it will be seen (cf. 46.11.) that the condition which is not satisfied is (44:B:a).
This may seem to be very grave, because (44:B:a) is the primary condition
in the sense that when it fails, the conditions (44:B:b), (44:B:c) cannot
even be formulated.
The concept of decomposition possesses a certain elasticity. This
appeared in 42.2.1., 42.2.2. and 42.5.2., where we succeeded in ridding
ourselves of an inconvenient auxiliary condition connected with the
decomposability of a game by modifying that concept. It will be seen that our
difficulties will again be met by this procedure, so that (44:D) will be
replaced by a correct and satisfactory theorem. Hence we must aim at
modifying our arrangements, so that the condition (44:B:a) can be discarded.
We will succeed in doing this, and then it will appear that conditions
(44:B:b), (44:B:c) make no difficulties and that a complete result can be
obtained.
¹ This is similar to the phenomenon that a symmetric game may possess an
asymmetric solution. Cf. 37.2.1.
DECOMPOSABLE GAMES 363
44.4. Extension of the Theory. Outside Sources
44.4.1. It is now time to discard the normalization which we introduced
(temporarily) in 44.1.: That the games under consideration are zero-sum.
We return to the standpoint of 42.2.2., according to which the games are
constant-sum.
This being understood, consider a game Γ which is decomposable
(for J, K) with J-, K-constituents Δ, H.
The theory of composability and decomposability of imputations, as
given in 44.2.1., 44.2.2., could now be repeated with insignificant changes.
(44:1)-(44:4) may be taken over literally, while (44:5)-(44:7) are only
modified in their right hand sides. Since (30:2) of 30.1.1. has been replaced
by (42:8*) of 42.4.1., those formulae (44:5)-(44:7) now become:
(44:5*)  Σ_{i' in J} α_{i'} = v(J),
(44:6*)  Σ_{i'' in K} α_{i''} = v(K),
and
(44:7*)  Σ_{i' in J} α_{i'} + Σ_{i'' in K} α_{i''} = v(I) = v(J) + v(K).
(The last equation on the right hand side by (42:6:b) in 42.3.2., or equally
by (41:6) in 41.3.2. with S = J, T = K.) The situation is exactly as in
44.2.1.; indeed, it really arises from that one by the isomorphism of 42.4.2.
Thus an α_I fulfills (44:7*), but for its decomposability (44:5*), (44:6*) are
needed, and (44:7*) does imply the equivalence of (44:5*) and (44:6*),
but it fails to imply the validity of either.
So the criterion of decomposability (44:A) in 44.2.1. is again true, only
with our (44:5*), (44:6*) in place of its (44:5), (44:6). And the final
conclusion of 44.2.2. may be repeated: Decomposition of an α_I (into β_J, γ_K)
is possible if and only if the two self-contained sets of players J, K are
given by this imputation α_I precisely their just dues, which are now v(J),
v(K).¹
tions the reason for (44:B:a) in 44.3.1. is a source of difficulties, we have
to remove it. This means removal of the conditions (44:5*), (44:6*), i.e.
of the condition (42:8*) in 42.4.1. from which they originate.
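The criterion just recalled is easy to state computationally: an imputation of the composite game decomposes for (J, K) exactly when the players of J receive v(J) in total (by (44:7*) this is equivalent to K receiving v(K)). The following is a minimal sketch in Python; the characteristic function v and the numbers are hypothetical illustrations, not data from the text.

```python
# Sketch: decomposability test for an imputation of a composite game.
# alpha maps each player to the amount assigned; v maps coalitions to values.

def is_decomposable(alpha, v, J, K):
    """Decomposition for (J, K) is possible iff the players of J
    receive v(J) in total; by (44:7*) this is equivalent to the
    players of K receiving v(K)."""
    return abs(sum(alpha[i] for i in J) - v[frozenset(J)]) < 1e-9

# Hypothetical 4-person composite game: J = {1, 2}, K = {3, 4},
# v(J) = 2, v(K) = 3, v(I) = v(J) + v(K) = 5 (additivity across J and K).
v = {frozenset({1, 2}): 2, frozenset({3, 4}): 3, frozenset({1, 2, 3, 4}): 5}
alpha = {1: 0.5, 2: 1.5, 3: 1.0, 4: 2.0}   # J gets 2 = v(J), K gets 3 = v(K)
beta  = {1: 1.0, 2: 2.0, 3: 1.0, 4: 1.0}   # J gets 3 != v(J): not decomposable

print(is_decomposable(alpha, v, {1, 2}, {3, 4}))  # True
print(is_decomposable(beta,  v, {1, 2}, {3, 4}))  # False
```
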
44.4.2. According to the above, we will attempt to work the theory of a
constant-sum game Γ with a new concept of imputations, which is based on
(42:7) of 42.4.1. (i.e. on (30:1) of 30.1.1.) alone, without (42:8*) in 42.4.1.
In other words:²
¹ Instead of zero, as loc. cit.
² We again denote the players by 1, · · · , n.
An extended imputation is a system of numbers α₁, · · · , αₙ with this
property:
(44:8)  αᵢ ≥ v((i)) for i = 1, · · · , n.
We impose no conditions upon Σ_{i=1}^{n} αᵢ. We view these extended
imputations, too, as vectors.
44.4.3. It will now be necessary to reconsider all our definitions which are
rooted in the concepts of imputation, i.e. those of 30.1.1. and 44.2.1. But,
before we do this, it is well to interpret this notion of extended imputations.
The essence of this concept is that it represents a distribution of certain
amounts between the players, without demanding that they should total
up to the constant sum of the game Γ.
Such an arrangement would be extraneous to the picture that the players
are only dealing with each other. However, we have always conceived of
imputations as a distributive scheme proposed to the totality of all players.
(This idea pervades, e.g., all of 4.4., 4.5.; it is quite explicit in 4.4.1.) Such
a proposal may come from one of the players,¹ but this is immaterial. We
can equally imagine that outside sources submit varying imputations to
the consideration of the players of Γ. All this harmonizes with our past
considerations; but in all this, those "outside sources" manifested
themselves only by making suggestions, without contributing to, or withdrawing
from, the proceeds of the game.
44.5. The Excess
44.5.1. Now our present concept of extended imputations may be taken
to express that the "outside sources" can make suggestions which actually
involve contributions or withdrawals, i.e. transfers. For the extended
imputation α = {α₁, · · · , αₙ} the amount of this transfer is
(44:9)  e = Σ_{i=1}^{n} αᵢ − v(I)
and will be called the excess of α. Thus
         e > 0 for a contribution,
(44:10)  e = 0 if no transfer takes place,
         e < 0 for a withdrawal.
¹ Who tries to form a coalition. Since we consider the entire imputation as his
proposal, this necessitates our assuming that he is even making propositions to those
players who will not be included in the coalition. To these he may offer their respective
minima v((i)) (possibly more, cf. 38.3.2. and 38.3.3.). There may also be players in
intermediate positions "between included and excluded" (cf. the second alternative in
37.1.3.). Of course, those less favored players may make their dissatisfaction effective;
this leads to the concept of domination, etc.
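The excess of (44:9) and the sign classification of (44:10) can be sketched directly in code. The value v(I) and the imputations below are hypothetical illustrations.

```python
# Sketch: excess of an extended imputation (44:9), with the sign
# classification of (44:10).  All data are hypothetical.

def excess(alpha, v_I):
    """e = sum of the alpha_i minus v(I)."""
    return sum(alpha) - v_I

def classify(e):
    if e > 0:
        return "contribution"
    if e < 0:
        return "withdrawal"
    return "no transfer"

v_I = 5.0                                        # assumed value v(I)
print(classify(excess([2.0, 2.0, 2.0], v_I)))    # 6 - 5 = 1 > 0: contribution
print(classify(excess([1.0, 2.0, 2.0], v_I)))    # 5 - 5 = 0: no transfer
print(classify(excess([1.0, 1.0, 2.0], v_I)))    # 4 - 5 = -1 < 0: withdrawal
```
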
It will be necessary to subject this to certain suitable limitations, in
order to obtain realistic problems; and we will take due account of this.
It is important to realize how these transfers interact with the game.
The transfers are part of the suggestions made from outside, which are
accepted or rejected by the players, weighed against each other, according
to the principles of domination, etc.¹ In the course of this process, any
dissatisfied set of players may fall back upon the game Γ, which is the
sole criterion of the effectivity of their preference of their situation in one
(extended) imputation against another.² Thus the game, the physical
background of the social process under consideration, determines the
stability of all details of the organization, but the initiative comes through
the outside suggestions, circumscribed by the limitations of the excess
referred to above.
44.5.2. The simplest form that this "limitation" of the excess can take
consists in prescribing its value e explicitly. In interpreting this
prescription, (44:10) should be remembered.
The situation which exists when e ≠ 0 may at first seem paradoxical.
This is particularly true when e < 0, i.e. when a withdrawal from outside
is attempted. Why should the players, who could fall back on a game of
constant sum v(I), accept an inferior total? I.e. how can a "standard of
behavior," a "social order," based on such a principle be stable? There is,
nevertheless, an answer: The game is only worth v(I) if all players form a
coalition, act in concert. If they are split into hostile groups, then each
group may have to estimate its chances more pessimistically, and such a
division may stabilize totals that are inferior to v(I).³
The alternative e > 0, i.e. when the outside interference consists of a
free gift, may seem less difficult to accept. But in this case too, it will be
1 This is, of course, a narrow and possibly even somewhat arbitrary description of the
social process. It should be remembered, however, that we use it only for a definite and
limited purpose: To determine stable equilibria, i.e. solutions. The concluding remarks
of 4.6.3. should make this amply clear.
² We are, of course, alluding to the definitions of effectivity and domination, cf. 4.4.1.
and the beginning of 4.4.3., given in exact form in 30.1.1. We will extend the exact
definitions to our present concepts in 44.7.1.
³ For a first quantitative orientation, in the heuristic manner: If the players are
grouped into disjunct sets (coalitions) S₁, · · · , Sₚ, then the total of their own
valuations is v(S₁) + · · · + v(Sₚ). This is ≤ v(I) by (42:6:c) in 42.3.2.
Oddly enough, this sum is actually v(I) when p = 2, by (42:6:b) in 42.3.2.;
i.e. in this model the disagreements between three or more groups are the effective sources
of damage.
Clearly, by (42:6:c) in 42.3.2., the above sums v(S₁) + · · · + v(Sₚ) are all
≥ Σ_{i=1}^{n} v((i)). On the other hand, this latter expression is one of them (put p = n,
S_i = (i)). So the damage is greatest when each player is isolated from all others.
The whole phenomenon disappears, therefore, when Σ_{i=1}^{n} v((i)) = v(I), i.e. when the
game is inessential. (Cf. (42:11) in 42.5.1.)
necessary to study the game in order to see how the distribution of this
gift among the players can be governed by stable arrangements. It is to
be expected that the optimistic appraisal of their own chances, derived
from the possibilities of the various coalitions in which they might
participate, will determine the players in making their claims. The theory
must then provide their adjustment to the available total.
44.6. Limitations of the Excess.
The Nonisolated Character of a Game in the New Setup
44.6.1. These considerations indicate that the excess e must be neither
too small (when e < 0), nor too large (when e > 0). In the former case a
situation would arise where each player would prefer to fall back on the
game, even if the worst should happen, i.e. if he has to play it isolated. 1
In the latter case it will happen that the "free gift" is "too large," i.e. that
no player in any imagined coalition can make such claims as to exhaust the
available total. Then the very magnitude of the gift will act as a dissolvent
on the existing mechanisms of organizations.
We will see in 45. that these qualitative considerations are correct
and we will get from rigorous deductions the details of their operation and
the precise value of the excess at which they become effective.
44.6.2. In all these considerations the game Γ can no longer be
considered as an isolated occurrence, since the excess is a contribution or a
withdrawal by an outside source. This makes it intelligible that this
whole train of ideas should come up in connection with the decomposition
theory of the game Γ. The constituent games Δ, H are indeed no longer
entirely isolated, but coexistent with each other.² Thus, there is a good
reason to look at Δ, H in this way; whether the composite game Γ should
be treated in the old manner (i.e. as isolated), or in the new one, may be
debatable. We shall see, however, that this ambiguity for Γ does not
influence the result essentially, whereas the broader attitude concerning Δ,
H proves to be absolutely necessary (cf. 46.8.3. and also 46.10.).
When a game Γ is considered in the above sense, as a non-isolated
occurrence, with contributions or withdrawals by an outside source, one
might be tempted to do this: Treat this outside source also as a player,
including him together with the other players into a larger game Γ′. The
rules of Γ′ (which includes Γ) must then be devised in such a manner as to
provide a mechanism for the desired transfers. We shall be able to meet
this demand with the help of our final results, but the problem has some
intricacies that are better considered only at that stage.
¹ This happens when the proposed total v(I) + e is < Σ_{i=1}^{n} v((i)). As the last
expression is equal to v(I) − nγ (by (42:11) in 42.5.1.), this means e < −nγ.
We will see in 45.1. that this is precisely the criterion for e being "too small."
² This in spite of the absence of "interactions," as far as the rules of the game are
concerned; cf. 41.2.3., 41.2.4.
44.7. Discussion of the New Setup. E(e₀), F(e₀)
44.7.1. The reconsideration of our old definitions mentioned at the
beginning of 44.4.3. is a very simple matter.
For the extended imputations we have the new definitions of 44.4.2.
The definitions of effectivity and domination we take over unchanged from
30.1.1.;¹ the supporting arguments brought forward in the discussion which
led up to those definitions appear to lose no strength by our present
generalizations. The same applies to our definition of solutions eod.,² with one
caution: The definition of a solution referred to makes the concept of a
solution dependent upon the set of all imputations in which it is formed.
Now in our present setup of extended imputations we shall have to consider
limitations concerning them, notably concerning their excesses, as
indicated in 44.5.1. These restrictions will determine the set of all extended
imputations to be considered, and thereby the concept of a solution.
44.7.2. Specifically we shall consider two types of limitations.
First, we shall consider the case where the value of the excess is
prescribed. Then we have an equation
(44:11)  e = e₀
with a given e₀. The meaning of this restriction is that the transfer from
outside is prescribed, in the sense of the discussion of 44.5.2.
Second, we shall consider the case where only an upper limit of the excess
is prescribed. Then we have an inequality
(44:12)  e ≤ e₀
with a given e₀. The meaning of this restriction is that the transfer from
outside is assigned a maximum (from the point of view of the players who
receive it).
The case in which we are really interested is the first one, i.e. that one
of 44.5.2. The second case will prove technically useful for the clarification
of the first one, although its introduction may at first seem artificial.
We refrain from considering further alternatives, because we will be able to
complete the indicated discussion with these two cases alone.
Denote the set of all extended imputations fulfilling (44:11) (first case)
by E(e₀). Considering (44:9) in 44.5.1., we can write (44:11) as
(44:11*)  Σ_{i=1}^{n} αᵢ = v(I) + e₀.
Denote the set of all extended imputations fulfilling (44:12) (second case)
by F(e₀). Considering (44:9) in 44.5.1., we can write (44:12) as
(44:12*)  Σ_{i=1}^{n} αᵢ ≤ v(I) + e₀.
¹ I.e. (30:3); (30:4:a)-(30:4:c) loc. cit., respectively.
² I.e. (30:5:a), (30:5:b) or (30:5:c) eod.
For the sake of completeness, we repeat the characterization of an extended
imputation which must be added to (44:11*), as well as to (44:12*):
(44:13)  αᵢ ≥ v((i)), for i = 1, · · · , n.
Note that the definitions of (44:9) as well as (44:11*), (44:12*) and (44:13)
are invariant under the isomorphism of 42.4.2.
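Membership in E(e₀) and F(e₀) is a pair of elementary numerical tests, (44:11*) or (44:12*) together with (44:13). A minimal sketch in Python, with a hypothetical 3-person game as data:

```python
# Sketch: membership tests for the sets E(e0) and F(e0) of extended
# imputations, per (44:11*), (44:12*) and (44:13).  Data are hypothetical.

def is_extended_imputation(alpha, v_single):
    """(44:13): alpha_i >= v((i)) for every player i."""
    return all(a >= v for a, v in zip(alpha, v_single))

def in_E(alpha, v_single, v_I, e0, tol=1e-9):
    """(44:11*): components sum to exactly v(I) + e0."""
    return (is_extended_imputation(alpha, v_single)
            and abs(sum(alpha) - (v_I + e0)) < tol)

def in_F(alpha, v_single, v_I, e0, tol=1e-9):
    """(44:12*): components sum to at most v(I) + e0."""
    return (is_extended_imputation(alpha, v_single)
            and sum(alpha) <= v_I + e0 + tol)

# Hypothetical 3-person game: v((i)) = -1 for each i, v(I) = 0.
v_single, v_I = [-1.0, -1.0, -1.0], 0.0
print(in_E([1.0, 1.0, 1.0], v_single, v_I, e0=3.0))   # True: excess exactly 3
print(in_E([0.0, 1.0, 1.0], v_single, v_I, e0=3.0))   # False: excess is 2
print(in_F([0.0, 1.0, 1.0], v_single, v_I, e0=3.0))   # True: excess 2 <= 3
```
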
44.7.3. Now the definition of a solution can be taken over from 30.1.1.
Because of the central role of this concept we restate that definition, adjusted
to the present conditions. Throughout the definition which follows, E(e₀)
can be replaced by F(e₀), as indicated by [ ].
A set V ⊆ E(e₀) [F(e₀)] is a solution for E(e₀) [F(e₀)] if it possesses the
following properties:
(44:E:a) No β in V is dominated by an α in V.
(44:E:b) Every β of E(e₀) [F(e₀)] not in V is dominated by some α
in V.
(44:E:a) and (44:E:b) can be stated as a single condition:
(44:E:c) The elements of V are those elements of E(e₀) [F(e₀)] which
are undominated by any element of V.
It will be noted that E(0) takes us back to the original 30.1.1. (zero-sum
game) and 42.4.1. (constant-sum game).
44.7.4. The concepts of composition, decomposition and constituents
of extended imputations can again be defined by (44:1)-(44:4) of 44.2.1.
As pointed out in 44.4.2., the technical purpose of our extending the concept
of imputation is now fulfilled: Decomposition, as well as composition, can
now always be carried out.
The connection of these concepts with the sets E(e₀) and F(e₀) is not so
simple; we will deal with it as the necessity arises.
For the composition, decomposition and constituents of sets of extended
imputations the definitions of 44.2.3. can now be repeated literally.
45. Limitations of the Excess. Structure of the Extended Theory
45.1. The Lower Limit of the Excess
45.1. In the setups of 30.1.1. and of 42.4.1. imputations always existed.
It is now different: Either set E(e₀), F(e₀) may be empty for certain e₀.
Obviously this happens when (44:11*) or (44:12*) of 44.7.2. conflict with
(44:13) eodem, and this is clearly the case for
v(I) + e₀ < Σ_{i=1}^{n} v((i))
in both alternatives. As the right hand side is equal to v(I) − nγ by
(42:11) in 42.5.1., this means
(45:1)  e₀ < −nγ.
STRUCTURE OF THE EXTENDED THEORY 369
If E(e₀) [F(e₀)] is empty, then the empty set is clearly a solution for it,
and since it is its only subset, it is also its only solution.¹ If, on the other
hand, E(e₀) [F(e₀)] is not empty, then none of its solutions can be empty.
This follows by literal repetition of the proof of (31:J) in 31.2.1.
The right hand side of the inequality (45:1) is determined by the game
Γ; we introduce this notation for it (with the opposite sign, and using (42:11)
in 42.5.1.):
(45:2)  Γ₁ = nγ = v(I) − Σ_{i=1}^{n} v((i)).
Now we can sum up our observations as follows:
(45:A) If
e₀ < −Γ₁,
then E(e₀), F(e₀) are empty and the empty set is their only
solution. Otherwise neither E(e₀) nor F(e₀) nor any solution of
either can be empty.
This result gives the first indication that "too small" values of e₀
(i.e. of e) in the sense of 44.6.1. exist. Actually, it corroborates the
quantitative estimate of footnote 1 on p. 366.
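For a reduced game, where Γ₁ = nγ, the emptiness criterion of (45:A) is a one-line comparison. The parameters below are hypothetical.

```python
# Sketch: the emptiness criterion (45:A) in the reduced case, where
# Gamma_1 = n * gamma.  Parameters are hypothetical.

def sets_are_empty(e0, n, gamma):
    """E(e0) and F(e0) are empty exactly when e0 < -Gamma_1 = -n*gamma."""
    return e0 < -n * gamma

print(sets_are_empty(-4.0, n=3, gamma=1.0))   # True: -4 < -3
print(sets_are_empty(-3.0, n=3, gamma=1.0))   # False: the bound itself is attainable
```
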
45.2. The Upper Limit of the Excess. Detached and Fully Detached Imputations
45.2.1. Let us now turn to those values of e₀ (i.e. of e) which are "too
large" in the sense of 44.6.1. When does the disorganizing influence of
the magnitude of e, which we there foresaw, manifest itself?
As indicated in 44.6.1., the critical phenomenon is this: The excess
may be too large to be exhausted by the claims which any player in any
imagined coalition can possibly make. We proceed to formulate this idea
in a quantitative way.
It is best to consider the extended imputations α themselves, instead
of their excesses e. Such an α is past any claims which may be made in
any coalition, if it assigns to the players of each (nonempty) set S ⊆ I
more than those players could get by forming a coalition in Γ, i.e. if
(45:3)  Σ_{i in S} αᵢ > v(S) for every nonempty set S ⊆ I.
Comparing this with (30:3) in 30.1.1. shows that our criterion amounts to
demanding that every nonempty set S be ineffective for α.
In our actual deductions it will prove advantageous to widen (45:3)
somewhat by including the limiting case of equality. The condition then
becomes
1 In spite of its triviality, this circumstance should not be overlooked. The text
actually repeats footnote 2 on p. 278.
(45:4)  Σ_{i in S} αᵢ ≥ v(S) for every S ⊆ I.¹
It is convenient to give these α a name. We call the α of (45:3) fully
detached, and those of (45:4) detached. As indicated, the latter concept
will be really needed in our proofs; both termini are meant to express that
the extended imputation is detached from the game, i.e. that it cannot be
effectively supported within the game by any coalition.
45.2.2. One more remark is useful:
The only restriction imposed upon extended imputations is (44:13) of
44.7.2.:
(45:5)  αᵢ ≥ v((i)) for i = 1, · · · , n.
Now if the requirement (45:4) of detachedness is fulfilled (and hence
a fortiori if the requirement (45:3) of full detachedness is fulfilled), then it is
unnecessary to postulate the condition (45:5) as well. Indeed, (45:5)
is the special case of (45:4) for S = (i).
This remark will be made use of implicitly in the proofs which follow.
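Both conditions (45:3) and (45:4) are finite checks over all nonempty coalitions, so they can be sketched directly. The characteristic function below is a hypothetical reduced symmetric 3-person game, not an example from the text.

```python
# Sketch: detachedness tests (45:3), (45:4) by enumerating coalitions.
# v is a hypothetical characteristic function, given as a dict over frozensets.
from itertools import combinations

def coalitions(players):
    for r in range(1, len(players) + 1):
        for S in combinations(players, r):
            yield frozenset(S)

def is_detached(alpha, v, strict=False):
    """(45:4): sum of alpha_i over S >= v(S) for every nonempty S.
    With strict=True the inequality is strict: full detachedness, (45:3)."""
    for S in coalitions(sorted(alpha)):
        total = sum(alpha[i] for i in S)
        if strict and not total > v[S]:
            return False
        if not strict and not total >= v[S]:
            return False
    return True

# Hypothetical reduced 3-person game: v((i)) = -1, v(two-element S) = 1, v(I) = 0.
v = {frozenset(S): {1: -1.0, 2: 1.0, 3: 0.0}[len(S)]
     for S in coalitions([1, 2, 3])}
alpha = {1: 0.5, 2: 0.5, 3: 0.5}           # each pair sums to 1 = v(S)
print(is_detached(alpha, v))               # True: detached
print(is_detached(alpha, v, strict=True))  # False: equality on the pairs
```
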
45.2.3. Now we can revert to the excesses, i.e. characterize those which
belong to detached (or fully detached) imputations. This is the formal
characterization:
(45:B) The game Γ determines a number Γ₂² with the following
properties:
(45:B:a) A fully detached extended imputation with the excess e
exists if and only if
e > Γ₂.
(45:B:b) A detached extended imputation with the excess e exists
if and only if
e ≥ Γ₂.
Proof: Existence of a detached α:³ Let a be the maximum of all v(S),
S ⊆ I (so a ≥ v(∅) = 0). Put α⁰ = {α₁⁰, · · · , αₙ⁰} = {a, · · · , a}.
Then for every nonempty S ⊆ I we have Σ_{i in S} αᵢ⁰ ≥ a ≥ v(S). This is
(45:4), so α⁰ is detached.
¹ It is no longer necessary to exclude S = ∅, since (45:4), unlike (45:3), is true when
S = ∅. Indeed, then both sides vanish.
² The intuitive meaning of these statements is quite simple: It is plausible that in
order to produce a detached or a fully detached imputation, a certain (positive) minimum
excess is required. Γ₂ is this minimum, or rather lower limit. Since the notions
"detached" and "fully detached" differ only in a limiting case (the = sign in (45:4)),
it stands to reason that their lower limits be the same. These things find an exact
expression in (45:B).
³ Note that it is necessary to prove this! The evaluation which we give here is crude;
for more precise ones cf. (45:F) below.
Properties of the detached α: According to the above, detached
α = {α₁, · · · , αₙ}
exist, and with them their excesses e = Σ_{i=1}^{n} αᵢ − v(I). By (45:4) (with
S = I) all these e are ≥ 0. Hence it follows by continuity that these e have
a minimum e*. Choose a detached α* = {α₁*, · · · , αₙ*} with this excess
e*.¹
We now put
(45:6)  Γ₂ = e*.
Proof of (45:B:a), (45:B:b): If α = {α₁, · · · , αₙ} is detached, then by
definition e = Σ_{i=1}^{n} αᵢ − v(I) ≥ e*. If α = {α₁, · · · , αₙ} is fully detached,
then (45:3) remains true if we subtract a sufficiently small δ > 0 from
each αᵢ. So α′ = {α₁ − δ, · · · , αₙ − δ} is detached. Hence by definition
e − nδ = Σ_{i=1}^{n} (αᵢ − δ) − v(I) ≥ e*, e > e*.
Consider now the detached α* = {α₁*, · · · , αₙ*} with
Σ_{i=1}^{n} αᵢ* − v(I) = e*.
Then (45:4) holds for α*; hence (45:3) holds if we increase each αᵢ* by a
δ > 0. So α″ = {α₁* + δ, · · · , αₙ* + δ} is fully detached. Its excess is
e = Σ_{i=1}^{n} (αᵢ* + δ) − v(I) = e* + nδ.
So every e = e* + nδ, δ > 0, i.e.
every e > e*, is the excess of a fully detached imputation, hence a fortiori
of a detached one; and e* is, of course, the excess of a detached imputation:
α*.
Thus all parts of (45:B:a), (45:B:b) hold for (45:6).
45.2.4. The fully detached and the detached extended imputations are
also closely connected with the concept of domination. The properties
involved are given in (45:C) and (45:D) below. They form a peculiar
antithesis to each other. This is remarkable, since our two concepts are strongly
analogous to each other; indeed, the second one arises from the first one by
the inclusion of its limiting cases.
¹ This continuity argument is valid because the = sign is included in (45:4).
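The number Γ₂ = e* of (45:6) can be estimated numerically as the minimum excess over detached imputations. The sketch below does this by a crude grid search for a hypothetical reduced symmetric 3-person game with γ = 1; the answer agrees with the relation Γ₂ = ½Γ₁ proved for n = 3 in 45.3.3. below.

```python
# Sketch: numerical estimate of Gamma_2 = e*, the minimum excess over all
# detached imputations (45:6), by a crude grid search.  Hypothetical data:
# reduced symmetric 3-person game, gamma = 1, i.e. v((i)) = -1,
# v(two-element S) = 1, v(I) = 0.
from itertools import combinations, product

players = [1, 2, 3]
v = {}
for r in range(1, 4):
    for S in combinations(players, r):
        v[frozenset(S)] = {1: -1.0, 2: 1.0, 3: 0.0}[r]

def detached(alpha):
    """(45:4) for the tuple alpha, over every nonempty coalition."""
    return all(sum(alpha[i - 1] for i in S) >= v[S] - 1e-9
               for r in range(1, 4)
               for S in map(frozenset, combinations(players, r)))

grid = [k / 10 for k in range(-10, 21)]          # alpha_i in [-1, 2], step 0.1
e_star = min(sum(a) for a in product(grid, repeat=3) if detached(a))
print(e_star)   # 1.5, i.e. Gamma_2 = (1/2) Gamma_1 = (1/2) * 3 * gamma
```

The minimum is attained at α = (½, ½, ½), where every two-element coalition holds with equality, exactly the limiting case that distinguishes detached from fully detached.
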
(45:C) A fully detached extended imputation α dominates no other
extended imputation β.
Proof: If α ⊳ β, then α must possess a nonempty effective set.
(45:D) An extended imputation α is detached if and only if it is
dominated by no other extended imputation β.
Proof: Sufficiency of being detached: Let α = {α₁, · · · , αₙ} be detached.
Assume a contrario β ⊳ α, with the effective set S. Then S is not empty;
αᵢ < βᵢ for i in S. So Σ_{i in S} αᵢ < Σ_{i in S} βᵢ ≤ v(S), contradicting (45:4).
Necessity of being detached: Assume that α = {α₁, · · · , αₙ} is not
detached. Let S be a (necessarily nonempty) set for which (45:4) fails,
i.e. Σ_{i in S} αᵢ < v(S). Then for a sufficiently small δ > 0, even
Σ_{i in S} (αᵢ + δ) ≤ v(S).
Put β = {β₁, · · · , βₙ} = {α₁ + δ, · · · , αₙ + δ}; then always αᵢ < βᵢ,
and S is effective for β: Σ_{i in S} βᵢ ≤ v(S). Thus β ⊳ α.
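The domination test underlying (45:C) and (45:D) is itself finitary, so both directions of (45:D) can be exercised on a small example. The game below is the same hypothetical reduced symmetric 3-person game (γ = 1); the particular imputations are illustrative only.

```python
# Sketch: the domination test of 30.1.1 for extended imputations, and a
# spot check of (45:D) on a hypothetical 3-person game: v((i)) = -1,
# v(two-element S) = 1, v(I) = 0.
from itertools import combinations

players = [1, 2, 3]
v = {frozenset(S): {1: -1.0, 2: 1.0, 3: 0.0}[r]
     for r in range(1, 4) for S in combinations(players, r)}

def dominates(a, b):
    """a dominates b iff some nonempty S is effective for a
    (sum of a_i over S <= v(S)) with a_i > b_i for every i in S."""
    for S in v:
        if (sum(a[i] for i in S) <= v[S]
                and all(a[i] > b[i] for i in S)):
            return True
    return False

detached = {1: 0.5, 2: 0.5, 3: 0.5}    # fulfills (45:4), equality on pairs
loose    = {1: -1.0, 2: -1.0, 3: 0.0}  # pair {1,2} sums to -2 < 1 = v({1,2})

# Per (45:D): the loose imputation is dominated, e.g. by raising players
# 1 and 2 within the effective pair {1, 2}; the detached one is not
# dominated by this candidate.
better = {1: 0.0, 2: 0.0, 3: 0.0}      # {1,2} effective: 0 + 0 <= 1
print(dominates(better, loose))        # True
print(dominates(better, detached))     # False
```
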
45.3. Discussion of the Two Limits Γ₁, Γ₂. Their Ratio
45.3.1. The two numbers Γ₁ and Γ₂, as defined in (45:2) of 45.1.
and in (45:B) of 45.2.3., are both in a way quantitative measures of the
essentiality of Γ. More precisely:
(45:E) If Γ is inessential, then Γ₁ = 0, Γ₂ = 0.
If Γ is essential, then Γ₁ > 0, Γ₂ > 0.
Proof: The statements concerning Γ₁, which is = nγ by (45:2) of 45.1.,
coincide with the definitions of inessentiality and essentiality of 27.3., as
reasserted in 42.5.1.
The statements concerning Γ₂ follow from those concerning Γ₁, by
means of the inequalities of (45:F), which we can use here.
45.3.2. The quantitative relationship of Γ₁ and Γ₂ is characterized as
follows:
(45:F) Always
(1/(n − 1)) Γ₁ ≤ Γ₂ ≤ ((n − 2)/2) Γ₁.
Proof: As we know, Γ₁ and Γ₂ are invariant under strategic equivalence;
hence we may assume the game Γ to be zero-sum, and even reduced in the
sense of 27.1.4. We can now use the notations and relations of 27.2.
Since Γ₁ = nγ, we want to prove that
(45:7)  (n/(n − 1)) γ ≤ Γ₂ ≤ (n(n − 2)/2) γ.
Proof of the first inequality of (45:7): Let α = {α₁, · · · , αₙ} be
detached. Then (45:4) gives for the (n − 1)-element set S = I − (k),
Σ_{i=1}^{n} αᵢ − αₖ = Σ_{i in S} αᵢ ≥ v(S) = γ, i.e.
(45:8)  αₖ ≤ Σ_{i=1}^{n} αᵢ − γ.
Summing (45:8) over k = 1, · · · , n gives Σ_{k=1}^{n} αₖ ≤ n Σ_{i=1}^{n} αᵢ − nγ,
i.e. (n − 1) Σ_{i=1}^{n} αᵢ ≥ nγ, Σ_{i=1}^{n} αᵢ ≥ (n/(n − 1)) γ. Now v(I) = 0, so e = Σ_{i=1}^{n} αᵢ.
Thus e ≥ (n/(n − 1)) γ for all detached imputations; hence Γ₂ ≥ (n/(n − 1)) γ.
n 1 w 1
Proof of the second inequality of (45:7):
Put a 00 = ^y^7, and ^ = {a?,  , aj } = {a 00 , , a 00 }.
This a is detached, i.e. it fulfills (45:4) for all S fi I. Indeed: Let p be
the number of elements of S. Now we have:
p = Q: 5 = , (45:4) is trivial.
p = 1: S = (t), (45:4) becomes a 00 ^ v((i)),
n 2 . i  i i
i.e. s y ^ ~~~7 which is obvious.
p ^ 2: (45:4) becomes pa 00 ^ v(S), but by (27:7) in 27.2.
vGS) ^ (n  p)7,
n 2
so it suffices to prove pa 00 2> (n p)7 i.e. p ^ 7 ^ (w p)7. This
amounts to p 7 ^ ^7> which follows from p ^ 2.
Thus a is indeed detached. As v(/) = 0, the excess is
e oo na oo = n(n ^" 2) 7 .
Hence ri ^ 7
45.3.3. It is worth while to consider the inequalities of (45:F) for
n = 1, 2, 3, 4, · · · successively:
n = 1, 2: In these cases the coefficient 1/(n − 1) of the lower bound of the
inequality is greater than the coefficient (n − 2)/2 of the upper bound.¹ This
may seem absurd. But since Γ is necessarily inessential for n = 1, 2 (cf.
the first remark in 27.5.2.), we have in these cases Γ₁ = 0, Γ₂ = 0, and
so the contradictions disappear.
n = 3: In this case the two coefficients 1/(n − 1) and (n − 2)/2 coincide:
Both are equal to ½. So the inequalities merge to an equation:
(45:9)  Γ₂ = ½ Γ₁.
n ≥ 4: In these cases the coefficient 1/(n − 1) of the lower bound is definitely
smaller than the coefficient (n − 2)/2 of the upper bound.² So now the
inequalities leave a non-vanishing interval open for Γ₂.
The lower bound Γ₂ = (1/(n − 1)) Γ₁ is precise, i.e. there exists for each
n ≥ 4 an essential game for which it is assumed. There also exist for each
n ≥ 4 essential games with Γ₂ = (1/(n − 2)) Γ₁, but it is probably not
possible to reach the upper bound of our inequality, Γ₂ = ((n − 2)/2) Γ₁. The
precise value of the upper bound has not yet been determined. We do not
need to discuss these things here any further.³
45.3.4. In a more qualitative way, we may therefore say that Γ₁, Γ₂
are both quantitative measures of the essentiality of the game Γ. They
measure it in two different, and to a certain extent independent, ways.
Indeed, the ratio Γ₂/Γ₁, which never occurs for n = 1, 2 (no essential
games!), and is a constant for n = 3 (its value is ½), is variable with Γ for
each n ≥ 4.
We saw in 45.1., 45.2., that these two quantities actually measure the
limits within which a dictated excess will not "disorganize" the players,
in the sense of 44.6.1. Judging from our results, an excess e < −Γ₁ is
"too small" and an excess e > Γ₂ is "too great" in that sense. This
view will be corroborated in a much more precise sense in 46.8.
¹ They are ∞, −½ for n = 1; 1, 0 for n = 2. Note also the paradoxical values ∞ and −½.
² 1/(n − 1) < (n − 2)/2 means 2 < (n − 1)(n − 2), which is clearly the case for all n ≥ 4.
³ For n = 4 our inequality is ⅓ Γ₁ ≤ Γ₂ ≤ Γ₁. As mentioned above, we know an
essential game with Γ₂ = ⅓ Γ₁ and also one with Γ₂ = ½ Γ₁.
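The behavior of the two coefficients of (45:F) for small n, as catalogued in 45.3.3., can be tabulated exactly with rational arithmetic:

```python
# Sketch: the coefficients of (45:F) for small n, as in 45.3.3.
# lower(n) = 1/(n-1), upper(n) = (n-2)/2.
from fractions import Fraction

def lower(n):
    return Fraction(1, n - 1)

def upper(n):
    return Fraction(n - 2, 2)

print(lower(2), upper(2))   # 1 0     -- lower exceeds upper: no essential game
print(lower(3), upper(3))   # 1/2 1/2 -- they coincide: Gamma_2 = Gamma_1 / 2
print(lower(4), upper(4))   # 1/3 1   -- a genuine interval opens up
print(all(lower(n) < upper(n) for n in range(4, 50)))   # True: 2 < (n-1)(n-2)
```
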
45.4. Detached Imputations and Various Solutions.
The Theorem Connecting E(e₀), F(e₀)
45.4.1. (44:E:c) in the definition of a solution in 44.7.3. and our result
(45:D) in 45.2.4. give immediately:
(45:G) A solution V for E(e₀) [F(e₀)] must contain every detached
extended imputation of E(e₀) [F(e₀)].
The importance of this result is due to its role in the following
consideration.
After what was said at the beginning of 44.7.2. about the roles of E(e₀)
and F(e₀), the importance of establishing the complete interrelationship
between these two cases will be obvious. I.e. we must determine the
connection between the solutions for E(e₀) and F(e₀).
Now the whole difference between E(e₀) and F(e₀) and their solutions
is not easy to appraise in an intuitive way. It is difficult to see a priori
why there should be any difference at all: In the first case the "gift," made
to the players from the outside, has the prescribed value e₀; in the second
case it has the prescribed maximum value e₀. It is difficult to see how the
"outside source," which is willing to contribute up to e₀, can ever be allowed
to contribute less than e₀ in a "stable" standard of behavior (i.e. solution).
However, our past experience will caution us against rash conclusions in this
respect. Thus we saw in 33.1. and 38.3. that already three- and four-person
games possess solutions in which an isolated and defeated player is not
"exploited" up to the limit of the physical possibilities, and the present
case bears some analogy to that.
45.4.2. (45:G) permits us to make a more specific statement:
A detached extended imputation α belongs by (45:G) to every solution for
F(e₀), if it belongs to F(e₀). On the other hand, α clearly cannot belong to
any solution for E(e₀) if it does not belong to E(e₀). We now define:
(45:10) D*(e₀) is the set of all detached extended imputations α in
F(e₀), but not in E(e₀).
So we see: Any solution of F(e₀) contains all elements of D*(e₀); any
solution of E(e₀) contains no element of D*(e₀). Consequently F(e₀) and E(e₀)
have certainly no solution in common if D*(e₀) is not empty.
Now the detached α of D*(e₀) are characterized by having an excess
e ≤ e₀, but not e = e₀, i.e. by
(45:11)  e < e₀.
From this we conclude:
(45:H) D*(e₀) is empty if and only if
e₀ ≤ Γ₂.
Proof: Owing to (45:B) and to (45:11) above, the non-emptiness of
D*(e₀) is equivalent to the existence of an e with Γ₂ ≤ e < e₀, i.e. to
e₀ > Γ₂. Hence the emptiness of D*(e₀) amounts to e₀ ≤ Γ₂.
Thus the solutions for F(e₀) and for E(e₀) are sure to differ when e₀ > Γ₂.
This is further evidence that e₀ is "too large" for normal behavior when it is
> Γ₂.
45.4.3. Now we can prove that the difference indicated above is the only
one between the solutions for E(e₀) and for F(e₀). More precisely:
(45:I) The relationship
(45:12)  W = V ∪ D*(e₀)
establishes a one-to-one relationship between all solutions V
for E(e₀) and all solutions W for F(e₀).
This will be demonstrated in the next section.
45.5. Proof of the Theorem
45.5.1. We begin by proving some auxiliary lemmas.
The first one consists of a perfectly obvious observation, but of wide applicability:
(45:J) Let the two extended imputations γ = {γ₁, …, γₙ} and δ = {δ₁, …, δₙ} bear the relationship
(45:13) γᵢ ≥ δᵢ for all i = 1, …, n;
then for every α, α ⊳ γ implies α ⊳ δ.
The meaning of this result is, of course, that (45:13) expresses some kind of inferiority of δ to γ, in spite of the intransitivity of domination. This inferiority is, however, not as complete as one might expect. Thus one cannot make the plausible inference of γ ⊳ β from δ ⊳ β, because the effectivity of a set S for δ may not imply the same for γ. (The reader should recall the basic definitions of 30.1.1.)
It should also be observed that (45:J) emerges only because we have extended the concept of imputations. For our older definitions (cf. 42.4.1.) we would have had Σ_{i=1}^{n} γᵢ = Σ_{i=1}^{n} δᵢ; hence γᵢ ≥ δᵢ for all i = 1, …, n necessitates γᵢ = δᵢ for all i = 1, …, n, i.e. γ = δ.
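The monotonicity lemma (45:J) can likewise be checked mechanically. The sketch below is illustrative, not from the text: the domination test follows (30:4:a)-(30:4:c) of 30.1.1., with effectivity meaning that the sum of α's components over S does not exceed v(S). It exhibits one instance: α dominates γ, and, since γᵢ ≥ δᵢ throughout, the same S shows that α dominates δ.

```python
from itertools import chain, combinations

def subsets(players):
    return chain.from_iterable(combinations(players, r)
                               for r in range(len(players) + 1))

def dominates(alpha, beta, v, I):
    """alpha dominates beta iff some non-empty S is effective for alpha
    (sum over S <= v(S)) and alpha_i > beta_i for all i in S."""
    return any(sum(alpha[i] for i in S) <= v(frozenset(S))
               and all(alpha[i] > beta[i] for i in S)
               for S in subsets(I) if S)

# Same toy game as before (an illustrative assumption).
v = lambda S: 1.0 if len(S) >= 2 else 0.0
I = (0, 1, 2)
gamma = (0.2, 0.3, 0.5)
delta = (0.2, 0.2, 0.5)    # delta_i <= gamma_i for every i, as in (45:13)
alpha = (0.3, 0.4, 0.3)    # dominates gamma via S = {0, 1}
```

Here α dominates γ through S = {0, 1} (effective, since 0.3 + 0.4 ≤ 1), and the domination of δ follows, exactly as (45:J) asserts. The converse direction would require transferring effectivity from δ to the larger γ, which is why it fails in general.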
45.5.2. Now four lemmas leading directly to the desired proof of (45:I).
(45:K) If α ⊳ β with α detached and in F(e₀) and β in E(e₀), then there exists an α′ ⊳ β with α′ detached and in E(e₀).
STRUCTURE OF THE EXTENDED THEORY 377
Proof: Let S be the set of (30:4:a)-(30:4:c) in 30.1.1. for the domination α ⊳ β. S = I would imply αᵢ > βᵢ for all i = 1, …, n, so
Σ_{i=1}^{n} αᵢ − v(I) > Σ_{i=1}^{n} βᵢ − v(I).
But as α is in F(e₀) and β in E(e₀), so Σ_{i=1}^{n} αᵢ − v(I) ≤ e₀ = Σ_{i=1}^{n} βᵢ − v(I), contradicting the above.
So S ≠ I. Choose, therefore, an i₀ = 1, …, n not in S. Define α′ = {α′₁, …, α′ₙ} with
α′ᵢ = αᵢ for i ≠ i₀, α′ᵢ₀ = αᵢ₀ + ε,
choosing ε ≥ 0 so that Σ_{i=1}^{n} α′ᵢ − v(I) = e₀. Thus all α′ᵢ ≥ αᵢ; hence α′ is detached, and it is clearly in E(e₀). Again, as α′ᵢ = αᵢ for i ≠ i₀, hence for all i in S, our α ⊳ β implies α′ ⊳ β.
(45:L) Every solution W for F(e₀) has the form (45:12) of (45:I) for a unique V ⊆ E(e₀).¹
Proof: Obviously the V in question, if it exists at all, is the intersection W ∩ E(e₀), so it is unique. In order that (45:12) should hold for it, we need only that the remainder of W be equal to D*(e₀), i.e.
(45:14) W − E(e₀) = D*(e₀).
Let us therefore prove (45:14).
Every element of D*(e₀) is detached and in F(e₀), so it is in W by (45:G). Again, it is not in E(e₀), so it is in W − E(e₀). Thus
(45:15) W − E(e₀) ⊇ D*(e₀).
If also
(45:16) W − E(e₀) ⊆ D*(e₀),
then (45:15), (45:16) together give (45:14), as desired. Assume, therefore, that (45:16) is not true.
Accordingly, consider an α = {α₁, …, αₙ} in W − E(e₀) and not in D*(e₀). Then α is in F(e₀), but not in E(e₀), so Σ_{i=1}^{n} αᵢ − v(I) < e₀. As
¹ We do not yet assert that this V is a solution for E(e₀); that will come in (45:M).
α is not in D*(e₀), this excludes its being detached. Hence there exists a non-empty set S with Σ_{i in S} αᵢ < v(S).
Now form α′ = {α′₁, …, α′ₙ} with
α′ᵢ = αᵢ + ε for i in S,
α′ᵢ = αᵢ for i not in S,
choosing ε > 0 so that still Σ_{i=1}^{n} α′ᵢ − v(I) ≤ e₀ and Σ_{i in S} α′ᵢ ≤ v(S). So α′ is in F(e₀). If it is not in W, then (as W is a solution for F(e₀)) there exists a β in W with β ⊳ α′. As all α′ᵢ ≥ αᵢ, this implies β ⊳ α by (45:J). This is impossible, since both β, α belong to (the solution) W. Hence α′ must be in W. Now α′ᵢ > αᵢ for all i in S, and Σ_{i in S} α′ᵢ ≤ v(S). So α′ ⊳ α. But as both α′, α belong to (the solution) W, this is a contradiction.
(45:M) The V of (45:L) is a solution for E(e₀).
Proof: V ⊆ E(e₀) is clear, and V fulfills (44:E:a) of 44.7.3. along with W (which is a solution for F(e₀)), since V ⊆ W. So we need only verify (44:E:b) of 44.7.3.
Consider a β in E(e₀), but not in V. Then β is also in F(e₀) but not in W; hence there exists an α in W with α ⊳ β (W is a solution for F(e₀)!).
If this α belongs to E(e₀), then it belongs to W ∩ E(e₀) = V, i.e. we have an α in E(e₀) with α ⊳ β.
If α does not belong to E(e₀), then it belongs to W − E(e₀) = D*(e₀), and so it is detached. Thus α ⊳ β, α detached and in F(e₀). Hence there exists by (45:K) an α′ ⊳ β, α′ detached and in E(e₀). By (45:G) this α′ belongs to W (E(e₀) ⊆ F(e₀), W is a solution for F(e₀)!); hence it belongs to W ∩ E(e₀) = V. So we have an α′ in E(e₀) with α′ ⊳ β.
Thus (44:E:b) of 44.7.3. holds at any rate.
(45:N) If V is a solution for E(e₀), then the W of (45:12) in (45:I) is a solution for F(e₀).
Proof: W ⊆ F(e₀) is clear, so we must prove (44:E:a), (44:E:b) of 44.7.3.
Ad (44:E:a): Assume α ⊳ β for two α, β in W. α ⊳ β and (45:D) exclude that β be detached. So β is not in D*(e₀), hence it is in
W − D*(e₀) = V.
Hence α ⊳ β excludes that α too be in (the solution) V. So α is in
W − V = D*(e₀).
Consequently α is detached.
Now (45:K) produces an α′ ⊳ β which is detached and in E(e₀). Being detached, α′ belongs by (45:G) to (the solution for E(e₀)) V. As α′, β both belong to (the solution) V and α′ ⊳ β, this is a contradiction.
Ad (44:E:b): Consider a β = {β₁, …, βₙ} in F(e₀), but not in W. Now form β(ε) = {β₁(ε), …, βₙ(ε)} = {β₁ + ε, …, βₙ + ε} for every ε ≥ 0. Let ε increase from 0 until one of these two things occurs for the first time:
(45:17) β(ε) is in E(e₀).¹
(45:18) β(ε) is detached.²
We distinguish these two possibilities:
(45:17) happens first, say for ε = ε₁ ≥ 0: β(ε₁) is in E(e₀), but it is not detached.
If ε₁ = 0, then β = β(0) is in E(e₀). As β is not in W ⊇ V, it is not in V; hence there exists an α ⊳ β in (the solution for E(e₀)) V. A fortiori this α is in W.
Assume next ε₁ > 0, and β(ε₁) in V. As β(ε₁) is not detached, there exists a (non-empty) S ⊆ I with Σ_{i in S} βᵢ(ε₁) < v(S). Besides, always βᵢ(ε₁) > βᵢ. So β(ε₁) ⊳ β. And β(ε₁) is in V, hence a fortiori in W.
Assume, finally, ε₁ > 0 and β(ε₁) not in V. As β(ε₁) is in E(e₀), there exists an α ⊳ β(ε₁) in (the solution for E(e₀)) V. Since always βᵢ(ε₁) > βᵢ, α ⊳ β(ε₁) implies α ⊳ β by (45:J). And α is in V, hence a fortiori in W.
(45:18) happens first, or simultaneously with (45:17), say for ε = ε₂ ≥ 0: β(ε₂) is still in F(e₀), and it is detached.
If β(ε₂) is in E(e₀), then it is by (45:G) in (the solution for E(e₀)) V.
If β(ε₂) is not in E(e₀), then it is in D*(e₀). So β(ε₂) is at any rate in W.
¹ I.e. the excess of β(ε) is e₀. For ε = 0, β(0) = β is in F(e₀), i.e. its excess is ≤ e₀; and the excess of β(ε) increases with ε.
² I.e. Σ_{i in S} βᵢ(ε) ≥ v(S) for all S ⊆ I. Each βᵢ(ε) increases with ε.
This excludes ε₂ = 0, since β = β(0) is not in W. So ε₂ > 0.
For 0 ≤ ε < ε₂, β(ε) is not detached, so there exists a non-empty S ⊆ I with Σ_{i in S} βᵢ(ε) < v(S). Hence there exists by continuity a non-empty S ⊆ I even with Σ_{i in S} βᵢ(ε₂) ≤ v(S). Besides, always βᵢ(ε₂) > βᵢ, hence β(ε₂) ⊳ β. And β(ε₂) belongs to W.
Summing up: In every case there exists an α ⊳ β in W. (This α was α, β(ε₁), α, β(ε₂) above, respectively.) So (44:E:b) is fulfilled.
We can now give the promised proof:
Proof of (45:I): Immediate, by combining (45:L), (45:M), (45:N).
45.6. Summary and Conclusions
45.6.1. Our main results, obtained so far, can be summarized as follows:
(45:O) If
(45:O:a) e₀ < Γ₁,
then E(e₀), F(e₀) are empty and the empty set is their only solution.
If
(45:O:b) Γ₁ ≤ e₀ ≤ Γ₂,
then E(e₀), F(e₀) are not empty; both have the same solutions, which are all not empty.
If
(45:O:c) e₀ > Γ₂,
then E(e₀), F(e₀) are not empty; they have no solution in common, and all their solutions are not empty.
Proof: Immediate, by combining (45:A), (45:I) and (45:H).
This result makes the critical character of the points e₀ = Γ₁, Γ₂ quite clear, and it further strengthens the views expressed at the end of 45.1. and following (45:H) in 45.4.2. concerning these points: that it is here where e₀ becomes "too small" or "too large" in the sense of 44.6.1.
45.6.2. We are also able now to prove some relations which will be useful later (in 46.5.).
(45:P) Let W be a non-empty solution for F(e₀), i.e. assume that e₀ ≥ Γ₁. Then
(45:P:a) Max_{α in W} e(α) = e₀.¹
DETERMINATION OF ALL SOLUTIONS 381
(45:P:b) Min_{α in W} e(α) = Min (e₀, Γ₂).²
Also
(45:P:c) Max_{α in W} e(α) − Min_{α in W} e(α) = Max (0, e₀ − Γ₂).
Proof: (45:P:c) follows from (45:P:a), (45:P:b), since
e₀ − Min (e₀, Γ₂) = Max (e₀ − e₀, e₀ − Γ₂) = Max (0, e₀ − Γ₂).
We now prove (45:P:a), (45:P:b).
Write W = V ∪ D*(e₀), V a solution for E(e₀), following (45:I). As e₀ ≥ Γ₁, V is not empty (by (45:A) or (45:O)). As we know, e(α) = e₀ throughout V and e(α) < e₀ throughout D*(e₀).
Now for e₀ ≤ Γ₂, D*(e₀) is empty (by (45:H)), so
(45:19) Max_{α in W} e(α) = Max_{α in V} e(α) = e₀,
(45:20) Min_{α in W} e(α) = Min_{α in V} e(α) = e₀.
And for e₀ > Γ₂, D*(e₀) is not empty (again by (45:H)); it is the set of all detached α with e(α) < e₀. Hence by (45:B:b) in 45.2.3. these e(α) have a minimum, Γ₂. So we have in this case:
(45:19*) Max_{α in W} e(α) = Max_{α in V} e(α) = e₀,
(45:20*) Min_{α in W} e(α) = Min_{α in D*(e₀)} e(α) = Γ₂.
(45:19), (45:19*) together give our (45:P:a), and (45:20), (45:20*) give together our (45:P:b).
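In computational terms, (45:P) says that the spread of excesses inside a non-empty solution W of F(e₀) depends only on e₀ and Γ₂. A minimal sketch (illustrative function and parameter names; Γ₁, Γ₂ are taken as given numbers, not computed from a game):

```python
def excess_range(e0, gamma1, gamma2):
    """Per (45:P): in a non-empty solution W of F(e0) (so e0 >= gamma1),
    the maximum excess is e0, the minimum is Min(e0, Gamma_2), and the
    width of the interval is Max(0, e0 - Gamma_2)."""
    if e0 < gamma1:
        raise ValueError("F(e0) admits only the empty solution")
    return e0, min(e0, gamma2), max(0.0, e0 - gamma2)
```

For example, with Γ₁ = 0 and Γ₂ = 0.5: for e₀ = 0.25 the interval degenerates to the single excess 0.25 (the case e₀ ≤ Γ₂), while for e₀ = 0.75 the excesses run from 0.5 to 0.75.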
46. Determination of All Solutions in a Decomposable Game
46.1. Elementary Properties of Decompositions
46.1.1. Let us now return to the decomposition of a game Γ.
Let Γ be decomposable for J, K (= I − J) with Δ, H as its J-, K-constituents.
Given any extended imputation α = {α₁, …, αₙ} for I, we form its J-, K-constituents β, γ (βᵢ = αᵢ for i in J, γᵢ = αᵢ for i in K), and their excesses
¹ Our assertion includes the claim that these Max_{α in W} and Min_{α in W} exist.
² Verbally: The maximum excess in the solution W is the maximum excess allowed in F(e₀): e₀. The minimum excess in the solution W is again e₀, unless e₀ > Γ₂, in which case it is only Γ₂. I.e. the minimum is as nearly e₀ as possible, considering that it must never exceed Γ₂.
The "width" of the interval of excesses in W is the excess of e₀ over Γ₂, if any.
(46:1)
Excess of α in I: e = e(α) = Σ_{i in I} αᵢ − v(I),
Excess of β in J: f = f(α) = Σ_{i in J} αᵢ − v(J),
Excess of γ in K: g = g(α) = Σ_{i in K} αᵢ − v(K).¹
Since
(46:2) v(J) + v(K) = v(I)
(by (42:6:b) in 42.3.2., or equally by (41:6) in 41.3.2. with S = J, T = K), therefore
(46:3) e = f + g.
(46:A) We have
(46:A:a) Γ₁ = Δ₁ + H₁,
(46:A:b) Γ₂ = Δ₂ + H₂.
(46:A:c) Γ is inessential if and only if Δ, H are both inessential.
Proof: Ad (46:A:a): Apply the definition (45:2) in 45.1. to Γ, Δ, H in turn:
(46:4) Γ₁ = v(I) − Σ_{i in I} v((i)),
(46:5) Δ₁ = v(J) − Σ_{i in J} v((i)),
(46:6) H₁ = v(K) − Σ_{i in K} v((i)).
Comparing (46:4) with the sum of (46:5) and (46:6) gives (46:A:a), owing to (46:2).
Ad (46:A:b): Let α, β, γ be as above (before (46:1)). Then α is detached (in I) if
Σ_{i in R} αᵢ ≥ v(R) for all R ⊆ I.
Recalling (41:6) in 41.3.2. we may write for this
(46:7) Σ_{i in S} αᵢ + Σ_{i in T} αᵢ ≥ v(S) + v(T) for all S ⊆ J, T ⊆ K.
Again β, γ are detached (in J, K) if
(46:8) Σ_{i in S} αᵢ ≥ v(S) for all S ⊆ J,
(46:9) Σ_{i in T} αᵢ ≥ v(T) for all T ⊆ K.
¹ Up to this point it was not necessary to give explicit expression to the dependence of α's excess e upon α. We do this now for e as well as for f, g.
Now (46:7) is equivalent to (46:8), (46:9). Indeed: (46:7) obtains by adding (46:8) and (46:9); and (46:7) specializes for T = ∅ to (46:8) and for S = ∅ to (46:9).
Thus α is detached if and only if its (J, K) constituents β, γ are both detached. As their excesses e and f, g are correlated by (46:3), this gives for their minima (cf. (45:B:b))
Γ₂ = Δ₂ + H₂,
i.e. our formula (46:A:b).
Ad (46:A:c): Immediate, by combining (46:A:a) or (46:A:b) with (45:E) as applied to Γ, Δ, H.
The quantities Γ₁, Γ₂ are both quantitative measures of the essentiality of the game Γ, in the sense of 45.3.1. Our above result states that both are additive for the composition of games.
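The equivalence of (46:7) with (46:8), (46:9), i.e. that detachment factors across a decomposition, can be spot-checked by brute force. In the sketch below all names and the two constituent games are illustrative assumptions; the composed v is built additively from v_J and v_K in the manner of (41:6).

```python
from itertools import chain, combinations

def subsets(players):
    return chain.from_iterable(combinations(players, r)
                               for r in range(len(players) + 1))

def is_detached(alpha, v, players):
    return all(sum(alpha[i] for i in S) >= v(frozenset(S))
               for S in subsets(players))

# Hypothetical decomposable game: J = {0, 1}, K = {2, 3}.
J, K = (0, 1), (2, 3)
vJ = lambda S: 1.0 if len(S) == 2 else 0.0      # toy 2-person constituent
vK = lambda S: 2.0 if len(S) == 2 else 0.0      # another toy constituent
v = lambda S: vJ(S & frozenset(J)) + vK(S & frozenset(K))   # cf. (41:6)

def factorizes(alpha):
    """Detached in I  <=>  J-constituent and K-constituent both detached."""
    whole = is_detached(alpha, v, J + K)
    parts = is_detached(alpha, vJ, J) and is_detached(alpha, vK, K)
    return whole == parts
```

For these constituents the minimum detached excess is 0 in J and 0 in K, and 0 in the composed game, a numerical instance of the additivity (46:A:b).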
46.1.2. Another lemma which will be useful in our further discussions:
(46:B) If α ⊳ β (for Γ), then the set S of 30.1.1. for this domination can be chosen with S ⊆ J or S ⊆ K without any loss of generality.¹
Proof: Consider the set S of 30.1.1. for the domination α ⊳ β. If accidentally S ⊆ J or S ⊆ K, then there is nothing to prove; so we may assume that neither S ⊆ J nor S ⊆ K. Consequently S = S₁ ∪ T₁, where S₁ ⊆ J, T₁ ⊆ K, and neither S₁ nor T₁ is empty.
We have αᵢ > βᵢ for all i in S, i.e. for all i in S₁, as well as for all i in T₁. Finally
Σ_{i in S} αᵢ ≤ v(S).
The left hand side is clearly equal to Σ_{i in S₁} αᵢ + Σ_{i in T₁} αᵢ, while the right hand side is equal to v(S₁) + v(T₁) by (41:6) in 41.3.2. Thus
Σ_{i in S₁} αᵢ + Σ_{i in T₁} αᵢ ≤ v(S₁) + v(T₁);
hence at least one of
Σ_{i in S₁} αᵢ ≤ v(S₁), Σ_{i in T₁} αᵢ ≤ v(T₁)
must be true.
Thus of the three conditions of domination in 30.1.1. (for α ⊳ β), (30:4:a), (30:4:c) hold for both of S₁, T₁, and (30:4:b) for at least one of them. Hence we may replace our original S by either S₁ (⊆ J) or T₁ (⊆ K). This completes the proof.
¹ I.e. this extra restriction on S does not (in this case!) modify the concept of domination.
46.2. Decomposition and Its Relation to the Solutions: First Results Concerning F(e₀)
46.2.1. We now direct our course towards the main objective of this part of the theory: the determination of all solutions U_I of the decomposable game Γ. This will be achieved in 46.6., concluding a chain of seven lemmas.
We begin with some purely descriptive observations.
Consider a solution U_I for F(e₀) of Γ. If U_I is empty, there is nothing more to say. Let us assume, therefore, that U_I is not empty; owing to (45:A) (or equally to (45:O)) this is equivalent to e₀ ≥ Γ₁.
Using the notations of (46:1) in 46.1.1. we form:
(46:10)
Max_{α in U_I} f(α) = φ̄,  Min_{α in U_I} f(α) = φ̲,
Max_{α in U_I} g(α) = ψ̄,  Min_{α in U_I} g(α) = ψ̲.¹
¹ That all these quantities can be formed, i.e. that the maxima and the minima in question exist and are assumed, can be ascertained by a simple continuity consideration.
Indeed f(α) = Σ_{i in J} αᵢ − v(J) and g(α) = Σ_{i in K} αᵢ − v(K) are both continuous functions of α, i.e. of its components α₁, …, αₙ. The existence of their maxima and minima is therefore a well known consequence of the continuity properties of the domain of α: the set U_I.
For the reader who is acquainted with the necessary mathematical background (topology) we give the precise statement and its proof. (The underlying mathematical facts are discussed e.g. by C. Carathéodory, loc. cit., footnote 1 on p. 343. Cf. there pp. 136-140, particularly theorem 5.)
U_I is a set in the n-dimensional linear space Lₙ (cf. 30.1.1.). In order to be sure that every continuous function has a maximum and a minimum in U_I, we must know that U_I is bounded and closed.
Now we prove:
(*) Any solution U for F(e₀) [E(e₀)] of an n-person game Γ is a bounded and closed set in Lₙ.
Proof: Boundedness: If α = {α₁, …, αₙ} belongs to U, then every αᵢ ≥ v((i)) and Σ_{i=1}^{n} αᵢ − v(I) ≤ e₀; hence αᵢ ≤ v(I) + e₀ − Σ_{j≠i} αⱼ ≤ v(I) + e₀ − Σ_{j≠i} v((j)). So each αᵢ is restricted to a fixed interval, and so these α form a bounded set.
Closedness: This is equivalent to the openness of the complement of U. That set is, by (30:5:c) in 30.1.1., the set of all β which are dominated by any α of U. (Observe
Given two α = {α₁, …, αₙ}, β = {β₁, …, βₙ}, there exists a unique γ = {γ₁, …, γₙ} which has the same J-component as α and the same K-component as β:
(46:11) γᵢ = αᵢ for i in J,
γᵢ = βᵢ for i in K.
46.2.2. We now prove:
(46:C) If α, β belong to U_I, then the γ of (46:11) belongs to U_I if and only if
(46:C:a) f(α) + g(β) ≤ e₀.
Incidentally
(46:C:b) e(γ) = f(α) + g(β).
Proof: Formula (46:C:b): By (46:3) in 46.1.1. e(γ) = f(γ) + g(γ), and clearly f(γ) = f(α), g(γ) = g(β).
Necessity of (46:C:a): Since U_I ⊆ F(e₀), therefore e(γ) ≤ e₀ is necessary, and by (46:C:b) this coincides with (46:C:a).
Sufficiency of (46:C:a): γ is clearly an extended imputation, along with α, β, and (46:C:a), (46:C:b) guarantee that γ belongs to F(e₀).¹
Now assume that γ is not in U_I. Then there exists a δ ⊳ γ in U_I. The set S of 30.1.1. for this domination may be chosen by (46:B) with S ⊆ J or S ⊆ K.
Now clearly δ ⊳ γ implies, when S ⊆ J, that δ ⊳ α,
that we are introducing the solution character of U at this point!)
For any α denote the set of all β with α ⊳ β by D_α. Then the complement of U is the sum of all D_α, α of U.
Since the sum of any number (even of infinitely many) open sets is again open, it suffices to prove the openness of each D_α, i.e. this: If α ⊳ β, then for every β′ which is sufficiently near to β we have also α ⊳ β′. Now in the definition of domination α ⊳ β by (30:4:a)-(30:4:c) in 30.1.1., β appears in the condition (30:4:c) only. And the validity of (30:4:c) is clearly not impaired by a sufficiently small change of β, since (30:4:c) is a < relation.
(Note that the same is not true for α, because α appears in (30:4:b) also, and (30:4:b) might be destroyed by arbitrarily small changes, since (30:4:b) is a ≤ relation. But we needed this property for β, and not for α!)
¹ This is the only use of (46:C:a).
and when S ⊆ K, that δ ⊳ β. As δ, α, β belong to U_I, both alternatives are impossible.
Hence γ must belong to U_I, as asserted.
We restate (46:C) in an obviously equivalent form:
(46:D) Let V_J be the set of all J-constituents and W_K the set of all K-constituents of the elements of U_I.
Then U_I obtains from these V_J and W_K as follows:
U_I is the set of all those γ which have a J-constituent α′ in V_J and a K-constituent β′ in W_K such that
(46:12) e(α′) + e(β′) ≤ e₀.¹
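Condition (46:12) gives an immediate recipe for assembling the candidates for U_I out of V_J and W_K. A minimal sketch (illustrative names; members of V_J, W_K are payoff tuples over J, K, and the values v(J), v(K) are passed in as numbers):

```python
def compose(VJ, WK, v_of_J, v_of_K, e0):
    """All gamma with J-constituent in VJ, K-constituent in WK, and
    e(alpha') + e(beta') <= e0, per (46:12)."""
    return [ap + bp
            for ap in VJ
            for bp in WK
            if (sum(ap) - v_of_J) + (sum(bp) - v_of_K) <= e0]

# Hypothetical inputs: one J-imputation of excess 0.2 (with v(J) = 1),
# two K-imputations of excesses 0.0 and 0.4 (with v(K) = 2).
VJ = [(0.6, 0.6)]
WK = [(1.0, 1.0), (1.2, 1.2)]
```

Note that this only enumerates the composition; whether the result is actually a solution depends further on V_J, W_K being solutions with the right maximal excesses, as established in (46:E)-(46:G) below in the text.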
46.3. Continuation
46.3. Recalling the definition of U_I's decomposability (for J, K) in (44:B) in 44.3.1., one sees with little difficulty that it is equivalent to this:
U_I obtains from the V_J, W_K of (46:D) as outlined there, but without the condition (46:12).
Thus (46:12) may be interpreted as expressing just to what extent U_I is not decomposable. This is of some interest in the light of what was said in 44.3.3. about (44:D:a) there.
One may even go a step further: The necessity of (46:12) in (46:D) is easy to establish. (It corresponds to (46:C:a), i.e. to the very simple first two steps in the proof of (46:C).) Hence (46:D) expresses that U_I is no further from decomposability than unavoidable.
All this, in conjunction with (44:D:b) in 44.3.3., suggests strongly that V_J, W_K ought to be solutions of Δ, H. With our present extensions of all concepts it is necessary, however, to decide which F(f₀), F(g₀) to take: f₀ being the excess we propose to use in J, and g₀ the one in K.² It will appear that the φ̄, ψ̄ of 46.2.1. are these f₀, g₀.
Indeed, we can prove:
(46:E)
(46:E:a) V_J is a solution of Δ for F(φ̄).
(46:E:b) W_K is a solution of H for F(ψ̄).
It is convenient, however, to derive first another result:
¹ Note that these α′, β′ are not the α, β of (46:C); they are their J-, K-constituents, as well as those of γ. e(α′), e(β′) are the excesses of α′, β′ formed in J, K. But they are equal to f(α), g(β), as well as to f(γ), g(γ). (All of this is related to (46:C).)
² The reader will note that this is something like a question of distributing the given excess e₀ in I between J and K.
(46:F)
(46:F:a) φ̄ + ψ̲ = e₀,
(46:F:b) φ̲ + ψ̄ = e₀.
Note that in (46:E), as well as in (46:F), the parts (a), (b) obtain from each other by interchanging J, Δ, φ̄, φ̲ with K, H, ψ̄, ψ̲. Hence it suffices to prove in each case only one of (a), (b); we choose (a).
Proof of (46:F:a): Choose an α in U_I for which f(α) assumes its maximum φ̄. Since necessarily e(α) ≤ e₀, and since by definition g(α) ≥ ψ̲, therefore (46:3) in 46.1.1. gives
(46:13) φ̄ + ψ̲ ≤ e₀.
Assume now that (46:F:a) is not true. Then (46:13) would imply further
(46:14) φ̄ + ψ̲ < e₀.
Use the above α in U_I with f(α) = φ̄, and choose also a β in U_I for which g(β) assumes its minimum ψ̲. Then f(α) + g(β) = φ̄ + ψ̲ ≤ e₀ (by (46:13) or (46:14)). Thus the γ of (46:C) belongs to U_I, too. Again (46:C) together with (46:14) gives
e(γ) = f(α) + g(β) = φ̄ + ψ̲ < e₀,
i.e. Σ_{i=1}^{n} γᵢ < v(I) + e₀. Now define
δ = {δ₁, …, δₙ} = {γ₁ + ε, …, γₙ + ε},
choosing ε > 0 so that Σ_{i=1}^{n} δᵢ = v(I) + e₀. Thus δ belongs to F(e₀).
If δ did not belong to U_I, then an η ⊳ δ would exist in U_I. By (45:J) η ⊳ γ, which is impossible, since η, γ are both in U_I. Hence δ belongs to U_I. Now Σ_{i in J} δᵢ − v(J) > Σ_{i in J} γᵢ − v(J) = Σ_{i in J} αᵢ − v(J), i.e. f(δ) > f(α) = φ̄, contradicting the definition of φ̄.
Consequently (46:F:a) must be true, and the proof is completed.
Proof of (46:E:a): If α′ belongs to V_J, then it is the J-constituent of an α of U_I. Hence (cf. footnote 1 on p. 386) e(α′) = f(α) ≤ φ̄, so that α′ belongs to F(φ̄). Thus V_J ⊆ F(φ̄).
So our task is to prove (44:E:a), (44:E:b) of 44.7.3.
Ad (44:E:a): Assume that α′ ⊳ β′ happened for two α′, β′ in V_J. Then α′, β′ are the J-constituents of two γ, δ in U_I. But α′ ⊳ β′ clearly implies γ ⊳ δ, which is impossible.
Ad (44:E:b): Consider an α′ in F(φ̄) but not in V_J. Then by definition e(α′) ≤ φ̄. Use the β in U_I mentioned in the above proof of (46:F:a), for which g(β) = ψ̲. Let β′ be the K-constituent of this β, so that β′ is in W_K and e(β′) = g(β) = ψ̲. Thus e(α′) + e(β′) ≤ φ̄ + ψ̲ = e₀ (use (46:F:a)). Form the γ (for I) which has the J-, K-constituents α′, β′. Then e(γ) = e(α′) + e(β′) ≤ e₀, i.e. γ belongs to F(e₀).
γ does not belong to U_I, because its J-constituent α′ does not belong to V_J. Hence there exists a δ ⊳ γ in (the solution for F(e₀)) U_I.
Let S be the set of 30.1.1. for the domination δ ⊳ γ. By (46:B) we may assume that S ⊆ J or S ⊆ K.
Assume first that S ⊆ K. As γ has the same K-constituent β′ as β, we can conclude from δ ⊳ γ that δ ⊳ β. Since both δ, β belong to U_I, this is impossible.
Consequently S ⊆ J. Denote the J-constituent of δ by δ′; as δ belongs to U_I, therefore δ′ belongs to V_J. γ has the J-constituent α′. Hence we can conclude from δ ⊳ γ that δ′ ⊳ α′.
Thus we have the desired δ′ from V_J with δ′ ⊳ α′.
46.4. Continuation
46.4.1. (46:D), (46:E) expressed the general solution U_I of Γ in terms of appropriate solutions V_J, W_K of Δ, H. It is natural, therefore, to try to reverse this procedure: to start with the V_J, W_K and to obtain U_I.
It must be remembered, however, that the V_J, W_K of (46:D) are not entirely arbitrary. If we reconsider the definitions (46:10) of 46.2.1. in the light of (46:D), then we see that they can also be stated in this form:
(46:15)
Max_{α′ in V_J} e(α′) = φ̄,  Min_{α′ in V_J} e(α′) = φ̲,
Max_{β′ in W_K} e(β′) = ψ̄,  Min_{β′ in W_K} e(β′) = ψ̲.
And (46:F) expresses a relationship of these φ̄, φ̲, ψ̄, ψ̲ (which are determined by V_J, W_K) with each other and with e₀.
46.4.2. We will show that this is the only restraint that must be imposed upon the V_J, W_K. To do this, we start with two arbitrary non-empty solutions V_J, W_K of Δ, H (which need not have been obtained from any solution U_I of Γ), and assert as follows:
(46:G) Let V_J be a non-empty solution of Δ for F(φ̄) and W_K a non-empty solution of H for F(ψ̄). Assume that φ̄, ψ̄ fulfill (46:15) above, and also that, with the φ̲, ψ̲ of (46:15),
(46:16) φ̄ + ψ̲ = φ̲ + ψ̄ = e₀.
For any α′ of V_J and any β′ of W_K with
(46:17) e(α′) + e(β′) ≤ e₀,
form the γ (for I) which has the J-, K-constituents α′, β′.
Denote the set of all these γ by U_I.
The U_I which are obtained in this way are precisely all solutions of Γ for F(e₀).
Proof: All U_I of the stated character are obtained in this way: Apply (46:D) to U_I, forming its V_J, W_K. Then all our assertions are contained in (46:D), (46:E), (46:F), together with (46:15).
All U_I obtained in this way have the stated character: Consider an U_I constructed with the help of V_J, W_K as described above. We have to prove that this U_I is a solution of Γ for F(e₀).
For every γ of U_I our (46:17) gives e(γ) = e(α′) + e(β′) ≤ e₀, so that γ belongs to F(e₀). Thus U_I ⊆ F(e₀).
So our task is to prove (44:E:a), (44:E:b) of 44.7.3.
Ad (44:E:a): Assume that η ⊳ γ happened for two η, γ in U_I. Let α′, β′ be the J-, K-constituents of γ and δ′, ε′ the J-, K-constituents of η, from which they obtain as described above. Let S be the set of 30.1.1. for the domination η ⊳ γ. By (46:B) we may assume that S ⊆ J or S ⊆ K. Now S ⊆ J would cause η ⊳ γ to imply δ′ ⊳ α′, which is impossible, since δ′, α′ both belong to V_J; and S ⊆ K would cause η ⊳ γ to imply ε′ ⊳ β′, which is impossible, since ε′, β′ both belong to W_K.
Ad (44:E:b): Assume, per absurdum, the existence of a γ in F(e₀) but not in U_I, such that there is no η of U_I with η ⊳ γ. Let α′, β′ be the J-, K-constituents of γ.
Assume first e(α′) ≤ φ̄. Then α′ belongs to F(φ̄). Consequently either α′ belongs to V_J, or there exists a δ′ in V_J with δ′ ⊳ α′. In the latter case choose an ε′ in W_K for which e(ε′) assumes its minimum value ψ̲. Form the η with the J-, K-constituents δ′, ε′. As δ′, ε′ belong to V_J, W_K, respectively, and as e(δ′) + e(ε′) ≤ φ̄ + ψ̲ = e₀, therefore η belongs to U_I. Besides, η ⊳ γ owing to δ′ ⊳ α′ (these being their J-constituents). Thus η contradicts our original assumption concerning γ. Hence we have demonstrated, for the case under consideration, that α′ must belong to V_J.
In other words:
(46:18) Either α′ belongs to V_J, or e(α′) > φ̄.
Observe that in the first case necessarily e(α′) ≥ φ̲, and of course in the second case e(α′) > φ̄ ≥ φ̲. Consequently:
(46:19) At any rate e(α′) ≥ φ̲.
Interchanging J and K carries (46:18), (46:19) into these:
(46:20) Either β′ belongs to W_K, or e(β′) > ψ̄.
(46:21) At any rate e(β′) ≥ ψ̲.
Now if we had the second alternative of (46:18), then this gives, in conjunction with (46:21),
e(γ) = e(α′) + e(β′) > φ̄ + ψ̲ = e₀,
which is impossible, as γ belongs to F(e₀). The second alternative of (46:20) is equally impossible.
Thus we have the first alternatives in both (46:18) and (46:20), i.e. α′, β′ belong to V_J, W_K. As γ belongs to F(e₀), therefore
e(α′) + e(β′) = e(γ) ≤ e₀.
Consequently γ must belong to U_I, contradicting our original assumption.
46.5. The Complete Result in F(e₀)
46.5.1. The result (46:G) is, in spite of its completeness, unsatisfactory in one respect: The conditions (46:16) and (46:17) on which it depends are altogether implicit. We will, therefore, replace them by equivalent, but much more transparent, conditions.
To do this, we begin with the numbers φ̄, ψ̄, which we assume to be given first. Which solutions V_J, W_K of Δ, H for F(φ̄), F(ψ̄) can we then use in the sense of (46:G)?
First of all, V_J, W_K must be non-empty; application of (45:A) or (45:O) to Δ, H (instead of Γ) shows that this means
(46:22) φ̄ ≥ Δ₁, ψ̄ ≥ H₁.
Consider next (46:15). Apply (45:P) of 45.6.1. to Δ, H (instead of Γ). Then (45:P:a) secures the two Max-equations of (46:15), while (45:P:b) transforms the two Min-equations of (46:15) into
(46:23) φ̲ = Min (φ̄, Δ₂), ψ̲ = Min (ψ̄, H₂).
Let us, therefore, define φ̲, ψ̲ by (46:23).
Now we express (46:16), i.e.
(46:16) φ̄ + ψ̲ = φ̲ + ψ̄ = e₀.
The first equation of (46:16) may also be written as
φ̄ − φ̲ = ψ̄ − ψ̲,
i.e. by (46:23)
(46:24) Max (0, φ̄ − Δ₂) = Max (0, ψ̄ − H₂).¹
46.5.2. Now two cases are possible:
Case (a): Both sides of (46:24) are zero. Then in each Max of (46:24) the 0-term is ≥ the other term, i.e. φ̄ − Δ₂ ≤ 0, ψ̄ − H₂ ≤ 0, i.e.
(46:25) φ̄ ≤ Δ₂, ψ̄ ≤ H₂.
Conversely: If (46:25) holds, then (46:24) becomes 0 = 0, i.e. it is automatically satisfied. Now the definition (46:23) becomes
(46:26) φ̲ = φ̄, ψ̲ = ψ̄,
and so the full condition (46:16) becomes²
(46:27) φ̄ + ψ̄ = e₀.
(46:25) and (46:27) give also
(46:28) e₀ ≤ Δ₂ + H₂ = Γ₂.
Case (b): Both sides of (46:24) are not zero. Then in each Max of (46:24) the 0-term is < the other term, i.e. φ̄ − Δ₂ > 0, ψ̄ − H₂ > 0, i.e.
(46:29) φ̄ > Δ₂, ψ̄ > H₂.³
¹ Cf. (45:P:c) and its proof.
² Of which we used only the first part to obtain (46:24), on which this discussion is based.
³ Note that the important point is that (46:25), (46:29) exhaust all possibilities, i.e. that we cannot have φ̄ ≤ Δ₂, ψ̄ > H₂, or φ̄ > Δ₂, ψ̄ ≤ H₂. This is, of course, due to the equation (46:24), which forces that both sides vanish or neither.
The meaning of this will appear in the lemmas which follow.
Conversely: If (46:29) holds, then (46:24) becomes φ̄ − Δ₂ = ψ̄ − H₂, i.e. it is not automatically satisfied. We can express (46:24) by writing
(46:30) φ̄ = Δ₂ + ω, ψ̄ = H₂ + ω,
and then (46:29) becomes simply
(46:31) ω > 0.
Now the definition (46:23) becomes
(46:32) φ̲ = Δ₂, ψ̲ = H₂,
and so the full condition (46:16)¹ becomes
Δ₂ + H₂ + ω = e₀,
i.e.
(46:33) e₀ = Γ₂ + ω.
(46:31) and (46:33) give also
(46:34) e₀ > Γ₂.
46.5.3. Summing up:
(46:H) The conditions (46:16), (46:17) of (46:G) amount to this:
One of the two following cases must hold:
Case (a): (1) Γ₁ ≤ e₀ ≤ Γ₂,
together with
(2) Δ₁ ≤ φ̄ ≤ Δ₂,
(3) H₁ ≤ ψ̄ ≤ H₂,
and
(4) φ̄ + ψ̄ = e₀.
Case (b): (1) e₀ > Γ₂,
together with
(2) φ̄ > Δ₂,
(3) ψ̄ > H₂,
and
(4) e₀ − Γ₂ = φ̄ − Δ₂ = ψ̄ − H₂.
Proof: Case (a): We knew all along that e₀ ≥ Γ₁ and φ̄ ≥ Δ₁, ψ̄ ≥ H₁. The other conditions coincide with (46:28), (46:25), (46:27), which contain the complete description of this case.
Case (b): These conditions coincide with (46:34), (46:29), (46:30), (46:33), which contain the complete description of the case (after elimination of ω, which subsumes (46:31) under (1)-(3)).²
¹ Cf. footnote 2 on p. 391.
² The reader will note that while (1)-(3) for (a) and for (b) show a strong analogy, the final condition (4) is entirely different for (a) and for (b). Nevertheless, all this was obtained by the rigorous discussion of one consistent theory!
More will be said about this later.
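The dichotomy of (46:H) can be summarized computationally: given the constituent bounds Δ₁, Δ₂, H₁, H₂ and the excess e₀, case (a) leaves a one-parameter family of splittings with φ̄ + ψ̄ = e₀, while case (b) determines φ̄, ψ̄ uniquely. A minimal sketch (illustrative function name; the interval computation for case (a) is my own restatement of conditions (2)-(4) there):

```python
def split_excess(e0, d1, d2, h1, h2):
    """Admissible (phi-bar, psi-bar) per (46:H), using
    Gamma_1 = d1 + h1 and Gamma_2 = d2 + h2 from (46:A)."""
    g1, g2 = d1 + h1, d2 + h2
    if e0 < g1:                  # (45:O:a): only the empty solution
        return None
    if e0 <= g2:                 # case (a): phi-bar ranges over an interval,
        lo = max(d1, e0 - h2)    # with psi-bar = e0 - phi-bar
        hi = min(d2, e0 - h1)
        return ('a', lo, hi)
    w = e0 - g2                  # case (b): both values forced by (4)
    return ('b', d2 + w, h2 + w)
```

With Δ₁ = H₁ = 0, Δ₂ = 0.5, H₂ = 1.0 (so Γ₁ = 0, Γ₂ = 1.5): e₀ = 1 falls into case (a), and φ̄ may be anything in [0, 0.5]; e₀ = 2 falls into case (b) and forces φ̄ = 1.0, ψ̄ = 1.5.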
46.6. The Complete Result in E(e₀)
46.6. (46:G) and (46:H) characterize the solutions of Γ for F(e₀) in a complete and explicit way. It is now apparent, too, that the cases (a), (b) of (46:H) coincide with (45:O:b), (45:O:c) in 45.6.1.: Indeed, (a), (b) of (46:H) are distinguished by their conditions (1), and these are precisely (45:O:b), (45:O:c).
We now combine the results of (46:G), (46:H) with those of (45:I), (45:O). This will give us a comprehensive picture of the situation, utilizing all our information.
(46:I) If
(46:I:a) (1) e₀ < Γ₁,
then the empty set is the only solution of Γ, for E(e₀) as well as for F(e₀).
If
(46:I:b) (1) Γ₁ ≤ e₀ ≤ Γ₂,
then Γ has the same solutions Ū_I for E(e₀) and for F(e₀). These Ū_I are precisely those sets which obtain in the following manner:
Choose any two φ̄, ψ̄ so that
(2) Δ₁ ≤ φ̄ ≤ Δ₂,
(3) H₁ ≤ ψ̄ ≤ H₂,
and
(4) φ̄ + ψ̄ = e₀.
Choose any two solutions V_J, W_K of Δ, H for E(φ̄), E(ψ̄).
Then Ū_I is the composition of V_J and W_K in the sense of 44.7.4.
If
(46:I:c) (1) e₀ > Γ₂,
then Γ does not have the same solutions Ū_I for E(e₀) and U_I for F(e₀). These Ū_I and U_I are precisely those sets which obtain in the following manner: Form the two numbers φ̄, ψ̄ with
(2) φ̄ > Δ₂,
(3) ψ̄ > H₂,
which are defined by
(4) e₀ − Γ₂ = φ̄ − Δ₂ = ψ̄ − H₂.
Choose any two solutions V_J, W_K of Δ, H for E(φ̄), E(ψ̄).
Then Ū_I is the sum of the following sets: The composition of V_J and of the set of all detached β′ (in K) with e(β′) = H₂; the composition of the set of all detached α′ (in J) with e(α′) = Δ₂
and of W_K; the composition of the set of all detached α′ (in J) with e(α′) = φ and of the set of all detached β′ (in K) with e(β′) = ψ, taking all pairs φ, ψ with
(5) Δ₂ ≤ φ ≤ φ̄, H₂ ≤ ψ ≤ ψ̄,
and
(6) φ + ψ = e₀.
U_I obtains by the same process, only replacing the condition (6) by
(7) φ + ψ ≤ e₀.
Proof: Ad (46:I:a): This coincides with (45:O:a).
Ad (46:I:b): This is a restatement of case (a) in (46:H), except for the following modifications:
First: The identification of the E- and F-solutions for Γ, Δ, H. This is justified by applying (45:O:b) to Γ, Δ, H, which is legitimate by (1), (2), (3) of (46:I:b).
Second: The way in which we formed Ū_I = U_I from V_J, W_K, which differed from the one described in (46:H) insofar as we omitted the condition (46:17). This is justified by observing that (46:17) is automatically fulfilled: V_J ⊆ E(φ̄), W_K ⊆ E(ψ̄); hence for α′ in V_J and β′ in W_K always e(α′) = φ̄, e(β′) = ψ̄, and so by (4)
e(α′) + e(β′) = e₀.
Ad (46:I:c): This is a restatement of case (b) in (46:H), except for this modification:
We consider both E- and F-solutions for Γ (not only F-solutions as in (46:H)), and use only E-solutions for Δ, H (not F-solutions as in (46:H)). The way in which the former (Ū_I, U_I of Γ) are formed from the latter (V_J of Δ, W_K of H) is accordingly different from the one described in (46:H).
In order to remove these differences, one has to proceed as follows: Apply (45:I) and (45:O:c) to Γ, Δ, H, which is legitimate by (1), (2), (3) of (46:I:c). Then substitute the defining for the defined in (46:H). If these manipulations are carried out on (46:H) (in the present case (46:I:c)), then precisely our above formulation results.¹
46.7. Graphical Representation of a Part of the Result
46.7. The results of (46:1) may seem complicated, but they are actually
only the precise expression of several simple qualitative principles. The
reason for going through the intricacies of the preceding mathematical
derivation was, of course, that these principles are not at all obvious, and
that this is the way to discover and to prove them. On the other hand our
result can be illustrated by a simple graphical representation.
¹ If the reader carries this out, he will see that this transformation, although somewhat
cumbersome, presents absolutely no difficulty.
DETERMINATION OF ALL SOLUTIONS
395
We begin with a more formalistic remark.
A look at the three cases (46:I:a)-(46:I:c) discloses this: While nothing
more can be said about (46:I:a), the two other cases (46:I:b), (46:I:c)
have some common features. Indeed, in both instances the desired solu
tions U_Γ, Ū_Γ of Γ are obtained with the help of two numbers φ, ψ and certain
corresponding solutions V_J, W_K of Δ, H. The quantitative elements of the
representation of U_Γ, Ū_Γ are the numbers φ, ψ. As was pointed out in foot
note 2 on p. 386, they represent something like a distribution of the given
excess e₀ in I between J and K.
Figure 69.
φ, ψ are characterized in the cases (46:I:b) and (46:I:c) by their respec
tive conditions (2)-(4). Let us compare these conditions for (46:I:b) and
for (46:I:c).
They have this common feature: They force the excesses φ, ψ to belong
to the same case of Δ, H as the one to which the excess e₀ belongs for Γ.
They differ, however, very essentially in this respect: In (46:I:b) they
impose only one equation upon φ, ψ, while in (46:I:c) they impose two equa
tions.¹ Of course, the inequalities, too, may degenerate occasionally to
equations (cf. (46:J) in 46.8.3.), but the general situation is as indicated.
The connections between e₀ and φ, ψ are represented graphically by
Fig. 69.
¹ (2), (3) are inequalities in both cases. (4) stands for one equation in (46:I:b) and
for two equations in (46:I:c).
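The count of equations translates directly into degrees of freedom. With the single equation φ + ψ = e₀, one excess can still be chosen freely within an interval; a second independent equation removes that freedom as well. A schematic sketch, with hypothetical interval bounds d1, d2 and h1, h2 standing in for the inequalities (2), (3):

```python
# Degrees of freedom left to the pair (phi, psi) for a fixed e0.
# The rectangle [d1, d2] x [h1, h2] stands for the inequalities
# (2), (3); all numerical bounds are hypothetical.

d1, d2 = 0.0, 2.0
h1, h2 = 0.0, 3.0
e0 = 4.0

# One equation phi + psi == e0, as in (46:I:b): a whole interval
# of phi-values remains admissible.
phi_lo = max(d1, e0 - h2)   # smallest admissible phi
phi_hi = min(d2, e0 - h1)   # largest admissible phi

# A second equation (of whatever form), as in (46:I:c), singles out
# one point of that interval; e.g. pinning phi at phi_hi:
unique_pair = (phi_hi, e0 - phi_hi)
```

Here an entire interval [phi_lo, phi_hi] survives the first equation, while the second equation leaves a single pair: precisely the contrast between the interval at b and the unique point b′ in Fig. 69.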
This figure shows the φ, ψ-plane and under it the e₀-line. On the latter
the points |Γ|₁, |Γ|₂ mark the division into the three zones corresponding
to cases (46:I:a)-(46:I:c). The φ, ψ-domain which belongs to case (46:I:b)
covers the shaded rectangle marked (b) in the φ, ψ-plane; the φ, ψ-domain
which belongs to case (46:I:c) covers the line marked (c) in the φ, ψ-plane.
Given any φ, ψ-point, following the line leads to its e₀ value;
thus b, b′ yield a, a′, respectively. Given any e₀ value, the reverse process
discloses all its φ, ψ-points; thus a produces an entire interval at b, while
a′ yields the unique point b′.¹
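The division of the e₀-line into its three zones can be phrased as a simple classification. The boundary values below are hypothetical stand-ins for |Γ|₁ and |Γ|₂, introduced only for illustration:

```python
# The three zones of the e0-line in Fig. 69, with hypothetical
# boundary values g1, g2 standing in for |Gamma|_1 and |Gamma|_2.
g1, g2 = 0.0, 5.0

def zone(e0):
    """Classify an excess e0 as in cases (46:I:a)-(46:I:c)."""
    if e0 < g1:
        return "a"          # too small: no solutions at all
    if e0 <= g2:
        return "b"          # the normal zone
    return "c"              # too large
```

The two comparisons reproduce the division at the points |Γ|₁, |Γ|₂ described above.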
46.8. Interpretation: The Normal Zone. Heredity of Various Properties
46.8.1. Figure 69 calls for further comments, which are conducive to a
fuller understanding of (46:1).
First: There have been repeated indications (for the last time in the com
ment following (45:O)) that the cases (46:I:a) and (46:I:c), i.e. e₀ < |Γ|₁
and e₀ > |Γ|₂, respectively, are the "too small" or "too large" values of e₀
in the sense of 44.6.1.; i.e., that case (46:I:b), |Γ|₁ ≦ e₀ ≦ |Γ|₂, is in some
way the normal zone. Now our picture shows that when the excess e₀
of Γ lies in the normal zone, then the corresponding excesses φ, ψ of Δ,
H lie also within their respective normal zones.² In other words:
The normal behavior (position of the excess in (46:I:b)) is hereditary from
Γ to Δ, H.
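The heredity statement can be checked arithmetically, if one assumes (as indicated earlier in this chapter) that the bounds compose additively, |Γ|₁ = |Δ|₁ + |H|₁ and |Γ|₂ = |Δ|₂ + |H|₂; all numerical values below are hypothetical:

```python
# Heredity of the normal zone: if e0 lies in [g1, g2], with
# g1 = d1 + h1 and g2 = d2 + h2 (additive composition of the
# bounds, an assumption recalled from earlier in the chapter),
# then e0 = phi + psi is achievable with phi in the normal zone
# [d1, d2] of Delta and psi in the normal zone [h1, h2] of H.

d1, d2 = 1.0, 2.0    # hypothetical normal zone of Delta
h1, h2 = 0.5, 3.0    # hypothetical normal zone of H
g1, g2 = d1 + h1, d2 + h2

def normal_split(e0):
    """Return one admissible pair (phi, psi) with phi + psi == e0,
    or None if e0 lies outside the normal zone of Gamma."""
    if not (g1 <= e0 <= g2):
        return None
    phi = max(d1, e0 - h2)     # smallest admissible phi
    return phi, e0 - phi
```

For any e₀ in [g1, g2] the pair returned satisfies d1 ≦ φ ≦ d2 and h1 ≦ ψ ≦ h2, which is the heredity asserted in the text.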
Second: In the case (46:I:b) of the normal zone, φ, ψ are not completely
determined by e₀, as we repeatedly saw before. In case (46:I:c), on the
other hand, they are. This is pictured by the fact that the former domain
is the rectangle (b) in the φ, ψ-plane, while the latter domain is only a line (c).
It is worth noting, howe